Transcript for:
Introduction to Psychology Overview

Chapter 1 Introduction

Chapter Introduction

Taste buds contained in the papillae of the tongue are far more responsive to bitter tastes than to sweet tastes.

Learning Objectives

  1. Identify the five in-depth perspectives of psychology and explain how integrating these perspectives leads to a more comprehensive and accurate view of behaviour and mental processes.
  2. Explain why issues of diversity and ethics are important to explore across all topics in psychology.
  3. Describe some reasons for and consequences of the lack of Indigenous representation in Canadian psychology.
  4. Analyze the contributions of philosophy and the natural sciences to modern psychology.
  5. Describe how early movements in psychology (both in Canada and across the world) are significant for modern psychology.
  6. Discuss the importance of the scientific method as a foundation for psychology.
  7. Explain why psychology’s role as a hub science supports applications in many academic fields, contributes to the solutions of critical contemporary problems, and informs the development of public policies.
Studying the science of psychology can lead you to see yourself and other people in completely new ways. As human beings, we spend a great deal of time trying to figure out who we are and why we do the things we do. But our common-sense assumptions are often wrong. Hundreds of years ago, the world’s best doctors saw nothing problematic in handling cadavers (e.g., performing autopsies on dead bodies) and then delivering babies without washing their hands or medical instruments. No one knew why it was so common for new mothers to fall ill with “childbed fever.” It wasn’t until the careful scientific observations of Hungarian physician Ignaz Semmelweis in the 1840s that the link between hand-washing and infection rates was established. Even then, many people in the medical community held tightly to their belief that hand-washing procedures were unnecessary. Semmelweis’s findings flew in the face of their current understanding, and he was mocked for his suggestion that doctors engage in systematic hand-washing practices. Careful scientific research slowly dispelled these inaccurate notions.

Nonetheless, we hold tightly to many equally false common-sense beliefs about the human mind and behaviour. We all “know” that opposites attract, but we also “know” that birds of a feather flock together—so why do we need psychology to tell us what we already “know”? The problem is that both statements cannot be true at the same time, so the real state of affairs is neither obvious nor simple. Just as those early doctors could not trust their own intuitions about hand-washing, we cannot rely on our own intuitions to provide an accurate or complete understanding of the human mind and behaviour.

Let’s begin with a seemingly simple and familiar example: our ability to taste. We know a lot about taste—what we like or dislike, the different qualities of taste, and so on. Most of us can taste sweetness in a solution made of 1 part sugar and 200 parts water. As remarkable as this sensitivity appears to be, however, people can detect 1 part bitter substance (like quinine or the chemicals in broccoli) in 2 million parts water. This contrast in taste sensitivity between sweet and bitter does not reflect the actual difference between sweet and bitter substances—that is, bitter tastes are not 10 000 times stronger than sweet tastes—but rather how we experience them (the bitter detection threshold, 1 part in 2 million, is 10 000 times more dilute than the sweet threshold of 1 part in 200). Why would we have such a vast difference in sensitivity between these types of tastes? Our personal experience of taste does not help us much in answering this question, but psychological science can.

As it turns out, our greater sensitivity to bitter tastes is highly adaptive: Most poisons and toxins taste bitter, and if you want to stay alive, it is more important to avoid swallowing poison than to enjoy something sweet. Being far more sensitive to tastes that are bitter is a trait that has served our species well because it helps us avoid eating things that could kill us.

Psychology helps us understand why we do the things we do by providing a context for understanding the mind and behaviour. To gain that understanding, psychology addresses questions from multiple scientific perspectives. One can think of this like the zoom feature in Google Earth. In some parts of this textbook, we will zoom in on human behaviours, like looking at the highly magnified image of the papillae on the tongue (pictured), which allow us to taste, and trace the messages about taste sent from the tongue to the brain. At other times, we’ll zoom out to take in the larger picture and better understand why the boy on the previous page is giving his bitter-tasting broccoli a skeptical look.

Psychologists approach the study of mind using various in-depth perspectives, which will be described in this chapter. For example, we can look at the little boy’s reaction to his broccoli from a developmental perspective, which tells us that taste sensitivity decreases over the life span. Using a biological perspective, we can determine the neural mechanisms responsible for the difference in taste sensitivity. Or, using the social perspective, we can think about social influences like culture on food preferences. Poutine (French fries topped with gravy and cheese curds), enjoyed by many Canadians, is viewed with raised eyebrows by many people from other parts of the world. Meanwhile, fried insects, a commonly enjoyed snack in Thailand and other places, are not likely to be found in a typical Canadian grocery chain.

Although single perspectives can tell us a lot about a phenomenon like our sensitivity to bitter tastes, no one perspective can give us a complete answer. The best view comes from putting multiple perspectives together. You can learn a lot about your house by zooming in on it in Google Earth, but when you see how your home fits into the larger context of city, province, country, and planet, that viewpoint adds something special to your understanding. We’ll start by learning more about psychology’s main perspectives, along with a little background about their origins.
Our approach to these perspectives is consistent with recent recommendations for teaching introductory psychology made by the American Psychological Association’s Board of Educational Affairs (Gurung et al., 2016). Once we understand these perspectives, we’ll be in a better position to understand how they come together to give us the big picture.

“The purpose of psychology is to give us a completely different idea of the things we know best.” French poet Paul Valéry, 1943

Introspection is the personal observation of our own thoughts, feelings, and behaviours. Because we are not perfect observers of the operations of our own minds, psychologists developed other methods that provide scientific insight into the mind. In this functional magnetic resonance imaging (fMRI) scan, areas of the brain that were more active when participants were hungry than when they were full are highlighted. Through technology, researchers can better understand how the brain regulates hunger.

1-1 What Is Psychology?

The study of the mind is as fascinating as it is complex. Psychological scientists view the mind as a way of talking about the activities of the brain, including thought, emotion, and behaviour. A quick look at this textbook’s table of contents will show you the variety of approaches to mind that you will encounter, such as the thinking mind (cognitive psychology) and the troubled mind (abnormal psychology).

The word psychology is a combination of two Greek words: psyche (or psuche), meaning “soul,” and logos, meaning “the objective study of.” For the ancient Greeks, a soul was close to our modern view of a spirit or mind. Logos is the source of all our “ologies,” such as biology and anthropology. Literally translated, therefore, psychology means “the objective study of the mind.” Contemporary definitions of psychology refine and update this basic meaning. Today’s psychologists define their field as the scientific study of behaviour, mental processes, and brain functions—that is, the scientific study of the mind. Increased recognition that the brain is the organ of the mind has led many psychology departments to expand their names to the Department of Psychological and Brain Sciences (or the equivalent).

The phrase “behaviour, mental processes, and brain functions” has undergone several changes over the history of psychology. Behaviour refers to any action that we can observe. As we will see in Chapter 2, observation has been an important tool for psychologists from the early days of the discipline. Our definition does not specify whose behaviour is to be examined. Although the bulk of psychology focuses on human behaviour, animal behaviour has been an essential part of the discipline, both for understanding animals better and for comparing and contrasting animal and human behaviour.

The study of both mental processes and brain functions has been highly dependent on the methods available to psychologists. Early efforts to study mental processes were generally unsatisfactory because they relied on introspection, or the personal observation of your own thoughts, feelings, and behaviours. Because it is difficult for others to confirm your introspections, this subjective approach does not lend itself well to the scientific method. If you say that you are feeling hungry, how can anyone else know whether your observation is accurate? In addition, your mind and behaviour are governed by a host of structures, factors, and processes, most of which are not available through introspection. Innovations in the methods and mathematics used to investigate brain activity and behaviour have allowed psychologists to revisit the question of mental processes and brain functions with greater objectivity and success.

As an additional example of how our personal introspections can be misleading, consider the following scenario: You are sitting in a lecture hall, attentively listening to your professor and taking notes in your notebook.
One of the students seated in front of you is on a laptop, and in addition to taking notes, they are also engaged in a number of other tasks on their computer (e.g., checking the weather, doing a quick Google search). Do you think that this multitasking student would distract you? Would their behaviour impair your understanding of the lecture? When researchers conducted an experiment that mimicked this exact scenario, they found that the students in view of the multitasking peer reported that their learning was “barely hindered” by the multitasking student. They thought it was no big deal (just as you perhaps are thinking). Yet when the researchers gave them a comprehension quiz on the lecture, the students who had been in view of the multitasking peer scored 17 percent lower on the test than an equivalent group of students who had not been in view of a multitasking peer during the lecture (Sana, Weston, & Cepeda, 2013). Unless these students happened to believe that a 17 percent grade drop is no big deal (which seems unlikely!), there is an obvious disconnect between the students’ beliefs regarding the impact of the multitasking peer on their learning and its actual impact. Many studies have found similar discrepancies between students’ “feelings” of learning and their actual learning (e.g., Glass & Kang, 2018; Ward et al., 2017).

Psychology as a Hub Science

Why Is Psychology a Hub Science?

Most readers of this book are not pursuing careers in psychology, so how will this material help you in your chosen career? Psychology is all about people, and nearly all occupations require an understanding of people and their behaviour. An architect cannot design a functional space without considering how people respond to being crowded. A lawyer cannot cross-examine a witness without an understanding of memory, motivation, emotion, and stress. A teacher cannot encourage students to reach their potential without an understanding of child development and learning. Business leaders and economists cannot predict the movements of markets without understanding the minds making the relevant decisions. The study of psychology, then, provides you with better insight into and understanding of many occupations and fields of study.

You probably have seen applications that allow you to map your friendship networks on social media, with shorter links indicating greater connectivity and larger bubbles indicating more overlapping friendships with another person. Kevin Boyack and his colleagues generated a similar map of the sciences (see Figure 1.1), but used the reference lists in journal articles instead of friendship networks (Boyack, Klavans, & Börner, 2005). The resulting map shows the extent to which each of the sciences is influential and which other sciences it most influences. Boyack and colleagues referred to the most influential sciences as hub sciences. Their analysis shows that psychology is one of the seven major hub sciences, with strong connections to the medical sciences, the social sciences, and education. In the upcoming chapters of this book, we will highlight these connections with examples that are relevant to each particular chapter.

Figure 1.1 Psychology as a Hub Science. This map of science was generated by comparing citations from more than 1 million papers published in more than 7000 journals since 2000. Psychology appears among the seven major areas of science, indicated in the map by a different font.
The other six major areas are social sciences, mathematics, physics, chemistry, earth sciences, and medicine.

Source: Adapted from “Mapping the Backbone of Science,” by K. W. Boyack et al., 2005, Scientometrics, 64(3), 351–374. With kind permission from Springer Science+Business Media.

1-2 What Are Psychology’s Roots?

Psychology is a relatively young discipline, dating back only to the 1870s. However, the topics that interest modern psychologists reach much further back in the history of human thought. People living as long ago as 6000 to 5000 BCE in Assyria described their dreams (Restak, 1988). Among these accounts are descriptions of being chased, which are still among the most common dreams that people experience (Nielsen et al., 2003). See Figure 1.2 for common dream themes.

The psychology family tree includes two major roots: philosophy and the natural sciences. Psychologists answer questions traditionally posed by philosophers by borrowing the methods of the natural sciences. We examine scientific methods in detail in Chapter 2.

Figure 1.2 Many People Report Dreams with the Same Themes. Although we don’t understand why we dream about certain things, many people report similar themes in their dreams.

Marcos Mesa Sam Wordley/ Shutterstock.com; Source: Adapted from “Typical Dreams of Canadian University Students,” by T. A. Nielsen et al., 2003, Dreaming, 13, 211–235.

1-2a Psychology’s Philosophical Roots

Philosophers and psychologists share an interest in questions regarding the nature of the self, the effects of early experience, the existence of free will, and the origin of knowledge. Both disciplines consider the relative balance of biological factors (nature) and environmental factors (nurture) in shaping human behaviour. Both attempt to determine the relationships between self-interest and community welfare, between body and mind, and between humans and other species with which we share the planet. Although we typically consider questions of the unconscious mind and abnormal behaviour to be the realm of the psychologist, philosophers investigated these issues thousands of years before the first psychologist was born.

1-2b Psychology’s Natural Sciences Roots

Running along a parallel track to the early philosophers, ancient physicians were laying the foundation of our biological knowledge of the brain and nervous system, discussed in greater detail in Chapter 4. During this pursuit, physicians helped develop the scientific methods that would become central to contemporary psychology and previewed the application of the knowledge that they gained to the improvement of individual well-being. Until fairly recently, however, the whole of medicine remained a primitive business.

Beginning in the 17th and 18th centuries, scientists armed with new technologies, including the light microscope (see Figure 1.3), began to make a series of important discoveries about the human body and mind. For example, they demonstrated that a single sensory nerve carries one type of information instead of multiple types. You might have already duplicated this research yourself: when you rub your sleepy eyes, you see a flash of light. The nerves serving the retina of the eye do not know how to process information about touch or pressure. When stimulated, they are capable of sending only one type of message—light.

Hermann von Helmholtz (1821–1894) asked his participants to push a button when they felt a touch. When a thigh was touched, participants reacted faster than when a toe was touched. Because the toe is farther from the brain than the thigh, signals from the toe required more time to reach the brain. These types of discoveries about the physical aspects of mind convinced scientists that the mind was not supernatural and could be studied scientifically.

Philosophers began to incorporate physiological and psychological concepts into their work, and natural scientists began to explore the questions asked by philosophers. The gradual merger of these approaches resulted in a series of experiments that looked increasingly like contemporary psychology. Scientists began to ask questions about the relationships between physical stimulation and its resulting sensations. For example, Gustav Fechner (1801–1887) was able to identify the softest sound that a person could hear by randomly presenting sounds of different intensities to which a participant would respond “yes” or “no.” When the “yes” responses reached 50 percent, Fechner concluded that the sound was within the range that the human ear could detect (see Chapter 5). Although Fechner’s research seems very similar to Helmholtz’s, note the importance of “mental processes” in Fechner’s work, as opposed to the simple measurement of physiology in Helmholtz’s experiment. The stage was set for a modern science of psychology.

One of the most significant questions shared by philosophy and psychology asks whether the mind is inborn or is formed through experience. (a) Philosophers beginning with Aristotle (384–322 BCE) believed that all knowledge is gained through sensory experience. (b) Beginning in the 17th century, this idea flourished in the British philosophical school of empiricism. Empiricists, like John Locke, viewed the mind as a “blank slate” at birth, which then was filled with ideas gained by observing the world. (c) Contemporary psychologists believe that experience interacts with inborn characteristics to shape the mind. Intelligence, for example, is influenced by both genetics and experience.
During the 1990s, Romanian orphans adopted at young ages recovered from the effects of their seriously deprived social circumstances, but those who endured years of deprivation had more severe cognitive deficits (Ames, 1997; Wade et al., 2018).
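Fechner’s threshold procedure lends itself to a short simulation. The sketch below is our own minimal Python illustration (not part of the textbook): it presents each sound intensity many times to a hypothetical listener and reports the softest level detected on at least 50 percent of trials. The detect() function and all of its parameter values are invented stand-ins for a real participant.

```python
import random

def detect(intensity, threshold=50.0, noise=8.0):
    """Hypothetical listener: answers yes when the perceived intensity
    (true intensity plus random noise) exceeds an internal threshold."""
    return intensity + random.gauss(0, noise) > threshold

def estimate_threshold(levels, trials_per_level=100):
    """Method of constant stimuli: present each intensity many times,
    then report the softest level detected on at least 50% of trials."""
    for level in sorted(levels):
        yeses = sum(detect(level) for _ in range(trials_per_level))
        if yeses / trials_per_level >= 0.5:
            return level
    return None  # no level was detected reliably

levels = [20, 30, 40, 50, 60, 70, 80]  # intensities in arbitrary units
print("Estimated absolute threshold:", estimate_threshold(levels))
```

Running the sketch repeatedly gives slightly different estimates, a useful reminder that thresholds are statistical summaries of noisy responses rather than fixed switch points.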


www.BibleLandPictures.com/Alamy Stock Photo; Georgios Kollidas/ Shutterstock.com; Cynthia Johnson/Getty Images

Ancient people might have attempted to cure headaches, seizures, or psychological disorders by drilling holes in the skull. Bone growth around the hole indicates that some patients survived the procedure.

PRISMA ARCHIVO/Alamy Stock Photo

Figure 1.3 Microscopes Changed the World of Science. This light microscope was used by Anton van Leeuwenhoek to discover red blood cells in 1676. Microscopes opened a new world to scientists interested in living things.

A = Screw for adjusting the height of the object being examined
B = Metal plate serving as the body
C = Skewer to impale the object and rotate it
D = Lens, which was spherical

Mary Evans Picture Library/The Image Works; World History/Topham/The Image Works

The work of Hermann von Helmholtz (1821–1894) on reaction time helped establish the mind as something that could be studied scientifically.

bilwissedition Ltd. & Co. KG/Alamy Stock Photo

Highlights in the Philosophical and Scientific Roots of Psychology

Person or group
Things to remember

www.BibleLandPictures.com/Alamy Stock Photo Ancient Greek philosophers Observations can be accounted for by natural, not supernatural, explanations.

Georgios Kollidas/ Shutterstock.com British empiricists Knowledge is the result of experience.

PRISMA ARCHIVO/Alamy Stock Photo Ancient physicians The brain is the source of the mind.

Mary Evans Picture Library/The Image Works, World History/Topham/The Image Works 17th- and 18th-century natural scientists Discoveries about sensation and movement showed that the mind was physical.

bilwissedition Ltd. & Co. KG/Alamy Stock Photo Hermann von Helmholtz Studies of reaction time reinforced the idea of the mind as physical.

1-3 How Did the Science of Psychology Begin?

As psychology developed from the gradual merger of philosophical questions and scientific reasoning, the young discipline struggled to determine which questions and methods were best suited to its goals. Lively debates arose among the psychologists who helped to shape the field. Now we will review some of the key figures and perspectives from the history of psychology. A timeline of key milestones in the history of psychology is provided at the end of this section (see Figure 1.7). And while it might be tempting to skip over some of this history (after all, you’re taking intro psych now, so who cares what happened 150 years ago?), taking some time to review the history of the discipline will provide you with a better understanding of contemporary psychology, and many of the names that appear in this chapter will come up again in later chapters. In this sense, much of the material presented here serves as a preview of things to come.

Wilhelm Wundt (1832–1920), seated in this photo, is considered the first experimental psychologist.

INTERFOTO/Alamy Stock Photo

Connecting to Research: The First Official Psychology Experiment

We have given credit to Wilhelm Wundt for conducting the first experiments in psychology. What did those crucial first experiments look like? Wundt’s experiments reflected both his interest in consciousness and his training as a medical doctor. He was aware of methods that were used by researchers in physiology, such as the reaction-time measures pioneered by F. C. Donders in the Netherlands, and he sought to apply these methods to measure psychological processes such as attention and decision making (Danziger & Ballantyne, 1997).

The Questions: Is it possible to “time” mental processes? Are simple reaction times different from reaction times involving choices?

Methods: Wundt’s methods involved two sets of apparatus: one that would deliver a stimulus precisely to a participant, and a second that would measure and record the participant’s responses. His imposing-looking brass instruments used to carry out these tasks were displayed to an admiring public at the 1893 Chicago World’s Fair. The first experiments carried out by Wundt involved the presentation of stimuli, such as the sound of a ball dropped onto a platform, and measurements of reaction time, as indicated by the participant pressing a telegraph key. In addition to these simple reaction-time experiments, Wundt asked participants to make decisions: When you see this light, press the button on the left, but if you see that light, press the button on the right.

Ethics: As you continue reading your textbook, you will review a number of experiments like this one. Many will highlight important ethical considerations regarding the treatment of participants. These ethical concerns will be reviewed in more detail in Chapter 2, but in the meantime, Wundt’s experiment appears to have posed little risk to his participants. After you consider the criteria for conducting ethical research outlined in Chapter 2, however, you might want to return to this description and see whether you agree with our assessment.

Results: Wundt viewed reaction time as “mental chronometry” (Hergenhahn & Henley, 2013, p. 255). In other words, he believed that reaction time provided a measure of the amount of mental processing required to carry out a task. As his tasks became more complex, reaction time increased accordingly.

Conclusions: As mentioned earlier in this chapter, Wundt’s mentor, Hermann von Helmholtz, had performed a number of experiments similar to those performed by Wundt. Von Helmholtz touched the participant on the thigh and toe and discovered that the participant pushed a button faster in response to the thigh touch than the toe touch. What makes von Helmholtz’s demonstration a physiological experiment and Wundt’s a psychology experiment? Part of the answer is the interpretation that each scientist made of his results. For von Helmholtz, differences in reaction time in these two instances represented the effects of the speed of conduction of neural signalling. Because the toe is farther from the brain than the thigh is, messages from the toe take more time to reach the brain. Wundt’s simple reaction-time experiments were not that different, but his experiments on choice were more clearly psychological. As decisions became more complex, reaction time increased.
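Wundt’s “mental chronometry” rested on the subtractive logic pioneered by Donders: if a choice task includes everything a simple task does plus a decision, then choice reaction time minus simple reaction time estimates the duration of the decision itself. The sketch below is our own Python illustration with invented reaction times, not data from Wundt’s laboratory.

```python
# Donders' subtractive logic with made-up reaction times (milliseconds)
# for one hypothetical participant.
simple_rts = [190, 205, 198, 210, 202]   # press key to any stimulus
choice_rts = [265, 280, 270, 290, 275]   # press left or right key

mean_simple = sum(simple_rts) / len(simple_rts)
mean_choice = sum(choice_rts) / len(choice_rts)

# The difference is attributed to the added mental step of choosing.
print(f"Mean simple RT: {mean_simple:.0f} ms")
print(f"Mean choice RT: {mean_choice:.0f} ms")
print(f"Estimated decision time: {mean_choice - mean_simple:.0f} ms")
```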

1-3a Wilhelm Wundt and Voluntarism

The credit for being the first psychologist goes to Wilhelm Wundt (1832–1920), a former research assistant to von Helmholtz. Wundt was the first to believe that conscious experience could be studied scientifically, and he conducted the first documented psychological experiment in his laboratory at the University of Leipzig in 1879. This landmark experiment was a simple test of reaction time: How quickly after hearing a ball drop onto a platform could a person respond by striking a telegraph key? Importantly, Wundt believed that humans were capable of deciding what to attend to and thus what they perceive clearly. His approach to psychology was known as voluntarism, which reflects this emphasis on conscious will and choice.

1-3b Structuralism

Wundt saw mental experience as a hierarchy. The mind constructs an overall perception (e.g., “the food I’m eating tastes good”) out of building blocks made up of separate sensations, such as taste and vision, and emotional responses. One of Wundt’s students, Edward Titchener (1867–1927), expanded on Wundt’s views to establish a theory of structuralism, in which the mind could be broken down into the smallest elements of mental experience. Titchener’s approach to psychology paralleled the general trends in the physical sciences of his day, such as efforts in chemistry to break molecules into elements and attempts by physicists to describe matter at the level of the atom. Specifically, Titchener believed that conscious experience could be broken down into three types of mental elements: sensations, images, and feelings. Each of these could then be broken down further into its fundamental properties. Upon completion of his PhD in 1892, Titchener accepted a position at Cornell University in Ithaca, New York, and went on to create the largest (at the time) doctoral program in the United States.

John Wallace Baird (1869–1919), who was born and raised in southwestern Ontario and obtained his undergraduate degree from the University of Toronto, spent a brief period at the University of Leipzig training under Wundt before moving to Cornell University and completing his PhD under the supervision of Titchener. In 1918, in the midst of World War I, Baird was elected president of the American Psychological Association. During this time, he developed a program for the evaluation of army recruits that would serve as the first case of mass psychological testing anywhere in the world. Baird died of postsurgical complications in 1919. The impact of this early Canadian psychologist has been largely forgotten, probably due in part to the devaluation of the structuralist approach (Lahham & Green, 2013).

Edward Titchener (1867–1927) was a student of Wundt’s who developed an approach to psychology known as structuralism.

Fotosearch/Stringer/Getty Images

1-3c Gestalt Psychology

The structuralists’ effort to break behaviour into its essential elements was rejected by a group of early 20th-century German psychologists, including Kurt Koffka, Max Wertheimer, and Wolfgang Köhler, who founded Gestalt psychology. Gestalt, although lacking a clear translation into English, basically means “form” or “whole.” The Gestalt psychologists believed that breaking a “whole” perception into its building blocks, as advocated by the structuralists, would result in the loss of some important psychological information. For example, look at the middle image in Figure 1.4. It is the same in both the top and the bottom rows, yet in the context of the top row, most people interpret the image as the letter B, and in the context of the bottom row, the image looks like the number 13. The structuralists would have a difficult time explaining why the same visual building blocks could lead to such different conclusions.

Figure 1.4 Gestalt Psychologists Challenged Structuralism. Participants usually see the middle figure as a B when instructed to read the first row, but see a 13 when instructed to read the second row, even though the images are exactly the same. Structuralists, who believed that experiences could be reduced to small building blocks, would have difficulty explaining these results. In contrast, Gestalt psychologists, who emphasized the role of context or the “whole” in perception, would have no problem.

Max Wertheimer (1880–1943) was one of the founders of Gestalt psychology.

Bettmann/CORBIS

1-3d William James and Functionalism

While the structuralists and Gestalt psychologists continued their debate, a new type of psychology emerged, partly in response to the publication of Charles Darwin’s The Origin of Species in 1859 and The Descent of Man in 1871. Functionalism viewed behaviour as purposeful because it led to survival. Instead of restricting themselves to exploring the structure of the mind, functionalists were more interested in why behaviour and mental processes work in a particular way.

Functionalism’s chief proponent was William James (1842–1910), whose textbook Principles of Psychology (1890) dominated the field of psychology for 50 years. There are few topics in psychology that James did not address in his book, and many of his ideas sound modern. For example, he coined the term stream of consciousness to describe the flow of ideas that people experience while awake. Throughout his discussions of mental processes and behaviour, James emphasized the role of evolution. For the functionalist, the value of an activity depended on its consequences. If we enjoy ice cream, it must be because eating sweet, high-fat foods enhanced survival—at least it did for our ancestors, for whom famine was more of a problem than obesity. While the structuralists were interested in describing conscious experience, the functionalists were more interested in explaining why we have such experiences.

It is difficult to overestimate the impact of James on psychology. Structuralism came and went, but all contemporary psychologists are functionalists at heart. As described by two psychology historians, “As a systematic point of view, functionalism was an overwhelming success, but largely because of this success it is no longer a distinct school of psychology. It was absorbed into the mainstream psychology. No happier fate could await any psychological point of view” (Chaplin & Krawiec, 1979, p. 53).

G. Stanley Hall (1844–1924), who studied under both Wundt and James, established the first psychology research laboratory in North America at Johns Hopkins University in 1881. Hall was heavily influenced by evolutionary theory, and his research focused on the development and education of children and adolescents. In 1892, he became the first president of the American Psychological Association.

William James (1842–1910) proposed functionalism, an approach to the mind that viewed behaviour as purposeful.

Mary Evans Picture Library/The Image Works

Mary Whiton Calkins (1863–1930) was a student of William James at Harvard University, although she could not officially register because of her gender. She studied memory and the self and served as president of the American Psychological Association in 1905. In 1906, she wrote an article arguing for a reconciliation between the structuralist and functionalist approaches to psychology, contending that they were both concerned with understanding consciousness and should not be viewed as incompatible with each other (Calkins, 1906).

Courtesy of the Wellesley College Archives

1-3e Early Psychology in Canada

James Mark Baldwin (1861–1934) founded the first psychology laboratory in the British Commonwealth at the University of Toronto in 1891. Baldwin himself was not Canadian, and his appointment to the university, along with his experimental approach to studying psychology, was met with criticism on multiple fronts (Hoff, 1992). While Baldwin did not stay at the University of Toronto for long (in 1893 he returned to Princeton), the lab that he had established there continued to develop and expand, first under the direction of August Kirschmann (1860–1932), who came from Wundt’s laboratory in Germany, and then under Edward Alexander Bott (1887–1974), who established psychology as an independent department at the university in 1926 (Myers, 1982). During his time in Canada, Baldwin had two daughters, whose births sparked an interest in child development. Along with G. Stanley Hall, Baldwin became one of the first developmental psychologists, and his work would go on to inspire the likes of Jean Piaget and Lawrence Kohlberg, whom you will read more about in Chapter 11.

Psychology as a discipline was slower to develop in Canada than in the United States, and during the early part of the 20th century, Canadian psychologists often relied on American institutions for support and funding. As mentioned previously, Canadian John Wallace Baird became president of the American Psychological Association (APA) in 1918. Many Canadian psychologists were involved with the APA during this time, a trend that continues today (which is not surprising, given the close proximity of the two countries). The Canadian Psychological Association (CPA) was established in Ottawa in 1939, and its early focus was on aiding the Canadian war effort during World War II—for example, by developing tests that would help in the recruitment of soldiers (Conway, 2010). The current mandate of the CPA includes improving the health and wellness of all Canadians and promoting excellence in psychological research, education, and practice.

One of the first classes in experimental psychology at the University of Toronto, 1890.

Courtesy of the Psychology Department Museum, University of Toronto

Emma Sophia Baker (1856–1943) was a student of August Kirschmann and was the first Canadian woman to complete her doctoral dissertation on a psychological topic (colour perception and aesthetics). She was awarded her PhD from the University of Toronto in 1903.

Courtesy of Mount Allison University Archives

1-3f Clinical Roots: Freud and the Humanistic Psychologists

With the exception of occasional bursts of insight from the ancient Egyptians and Greeks, the most common view of psychological disorders over the course of history has been the supernatural approach. According to this view, psychological disorders resulted from the actions of evil spirits or other external, magical forces. Between the 17th and the 19th centuries, supernatural explanations for psychological disorders began to give way to two scientific approaches: a medical model and a psychological model. The medical model of psychological disorders emphasized physical causes of abnormal behaviour and medical treatments, such as medication. The psychological model suggested that abnormal behaviour can result from life experiences, leading to fear, anxiety, and other counterproductive emotional responses. Psychological treatments take many forms, from offering support to applying cognitive and behavioural methods that help people think and solve problems in new ways. As Chapters 14 and 15 will explain, contemporary psychologists typically combine these approaches to understand disorders and develop effective treatments. For example, we know that feeling depressed has both physical components (changes in the activity of chemical messengers in the brain) and experiential components (exposure to stressful situations). Treatment for depression often combines medication with efforts to change the way a person thinks about situations.

Sigmund Freud

When a case of capital punishment is discussed, we often hear about the prisoner’s terrible childhood from one side of the argument and about the need to protect society from further misdeeds by this person from the other side. Where would the Freudians and humanistic psychologists line up in this debate?

Sigmund Freud (1856–1939) built a bridge from his medical training as a physician to his belief in the impact of life experiences on behaviour. His psychodynamic theory and its applications to the treatment of psychological disorders dominated much of psychological thinking for the first half of the 20th century. Freud’s ideas about the existence of the unconscious mind, the development of sexuality, dream analysis, and the psychological roots of abnormal behaviour influenced not just psychology but also culture. He nearly single-handedly founded the study of personality in psychology, a topic explored more fully in Chapter 12. He developed the techniques of psychoanalysis for treating mental disorders, which are discussed in Chapter 15. He popularized the use of psychological principles for explaining everyday behaviour, and his theories are as likely to be covered in an English literature course as in a psychology course.

Our enthusiasm for Freud is tempered by a number of valid concerns. As you read further about Freud, keep in mind that his methods were not scientific. His theories are based on observations of his patients, primarily upper-class Viennese housewives who were not typical of the human population. Freud’s theories do not lend themselves to experimentation, an essential requirement for any scientific theory, as discussed further in Chapter 2. For example, how could you design an experiment to demonstrate that dreaming about water indicates you have unconscious concerns about sex? Finally, although psychoanalysis is still used on occasion as a therapy technique, it is rarely conducted in the strict Freudian manner.
Other techniques, discussed in Chapter 15, exceed psychoanalysis in effectiveness and popularity among contemporary therapists.

Humanistic Psychology

By the 1960s, American psychology was dominated by behaviourism (discussed in a later section of this chapter) on one side and Freud’s theories on the other. Structuralism had fallen into disfavour, and functionalism and Gestalt psychology were no longer distinct schools of thought. Just as other aspects of American culture at this time began to feature rebelliousness against current ways of thinking, some psychologists began to push against the restrictions of psychodynamic theory. Many of these disenchanted psychologists had been trained in psychoanalysis but were not seeing the results they desired. This dissatisfaction with prevailing views led these psychologists to propose new ways of thinking about the human mind through an approach known as humanistic psychology.

Experiencing Psychology: Testing Reaction Time

You have read about a number of reaction-time experiments in this chapter, including those conducted by Hermann von Helmholtz and Wilhelm Wundt. It is possible to conduct similar experiments without the brass equipment used by these early researchers. This exercise, developed by Dr. Erik Chudler of the University of Washington, is designed to measure your reaction time to a visual stimulus. All you need is a partner and a simple foot-long ruler. Hold the ruler vertically, with the highest numbers at the top, and ask your partner to place a hand at the bottom of the ruler without touching it. Tell your partner that you will drop the ruler sometime in the next five seconds and that they should grab the ruler as quickly as possible and hold it. Note the number at the top of the person’s hand after the ruler is caught, and use the following chart to convert your results to reaction time.

Number on ruler at top of person’s hand
Reaction time

2 in. (about 5 cm): 0.10 s (100 ms)
4 in. (about 10 cm): 0.14 s (140 ms)
6 in. (about 15 cm): 0.17 s (170 ms)
8 in. (about 20 cm): 0.20 s (200 ms)
10 in. (about 25.5 cm): 0.23 s (230 ms)
12 in. (about 30.5 cm): 0.25 s (250 ms)
17 in. (about 43 cm): 0.30 s (300 ms)
24 in. (about 61 cm): 0.35 s (350 ms)
31 in. (about 79 cm): 0.40 s (400 ms)
39 in. (about 99 cm): 0.45 s (450 ms)
48 in. (about 123 cm): 0.50 s (500 ms)
69 in. (about 175 cm): 0.60 s (600 ms)
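The chart is free-fall physics in disguise: a dropped ruler falls a distance d in time t = sqrt(2d/g). The short sketch below (our own Python illustration; the exercise itself requires no code) converts any catch distance into a reaction time and reproduces the chart’s values to within rounding.

```python
import math

G = 9.81  # acceleration due to gravity, in m/s^2

def reaction_time(distance_cm):
    """Time for a ruler to free-fall distance_cm, from d = (1/2) g t^2."""
    d = distance_cm / 100           # centimetres to metres
    return math.sqrt(2 * d / G)     # t = sqrt(2d / g), in seconds

# Check a few chart entries (results round to the published values).
for cm in (5, 10, 30.5, 61, 123):
    print(f"{cm:>6} cm -> {reaction_time(cm) * 1000:.0f} ms")
```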

Test your participant five times and average the response times. Wundt eventually discarded reaction time as a measure because he became so frustrated with the variability that he observed among participants and across tasks (Hergenhahn & Henley, 2013), but you can have some fun exploring these same sources of variability. Are you faster than your friends? Try testing people who are older than you. Are they faster or slower? Do your participants improve with practice? What happens to your reaction time if you dim the lights?

Measuring the spot where a person catches a falling ruler gives you a rough estimate of that individual’s reaction time to a visual stimulus.

Roger Freberg

Freud, James, and the behaviourists all believed that human behaviour was on a continuum with animal behaviour, which led to their assumption that humans naturally shared the aggressive impulses of animals. For Freud in particular, society had a civilizing function on the otherwise selfish and aggressive human. In contrast, the humanistic psychologists extended the philosophy of Jean-Jacques Rousseau and other 18th-century Romantic philosophers into a belief that people are innately good, are motivated to improve themselves, and behave badly only when corrupted by society.

Instead of focusing on what went wrong in people’s lives, humanistic psychologist Abraham Maslow (1908–1970) asked interesting questions about what made a person “good.” Maslow introduced a major theory of motivation, which is described in more detail in Chapter 7. As Chapter 16 will show, Maslow’s emphasis on what is good about people, as opposed to Freud’s focus on what goes wrong with people, re-emerged in the form of contemporary positive psychology.

Humanistic therapists rebelled against Freudian approaches to treatment. As described in more detail in Chapter 15, one humanistic therapist, Carl Rogers (1902–1987), developed a new approach to therapy called client-centred therapy. In this type of therapy, the people receiving treatment are called clients rather than patients, reflecting their equal standing with the therapist and their active role in the therapy process.

Humanistic approaches to therapy have also influenced communication, group process, parenting, and politics. The emphasis on active listening and the use of “I hear what you’re saying” reflections have become nearly cliché in courses on leadership training and interpersonal communication. Advice to parents to provide unconditional love to their children is a direct application of humanistic beliefs, which are discussed in more detail in Chapter 11. Finally, humanistic psychology continues to flavour our political and social domains. When issues such as capital punishment arise, the humanistic contention that there are no bad people, just bad societies that fail people, typically appears as part of the debate.

The work of Sigmund Freud (1856–1939) on consciousness, sexuality, abnormal behaviour, and psychotherapy played a dominant role in psychology during the first half of the 20th century.

Newscom/akg-images

Abraham Maslow (1908–1970) contributed a theory of motivation and ideas about exceptional people to the growing humanistic psychology movement.

Ann Kaplan/CORBIS

Prior to advances in psychological science, people with psychological disorders were subjected to bizarre “treatments,” such as this 18th-century spinning device intended to calm patients.

Everett Collection Inc/Alamy Stock Photo

Humanistic therapists, like Carl Rogers (1902–1987), often rebelled against Freudian approaches to therapy. For example, Rogers (in the white shirt leading a group therapy session) referred to people as clients rather than patients, the term that Freud used.

Michael Rougier/Getty Images

In 1920, Francis Cecil Sumner (1895–1954) became the first African American to receive a doctorate in psychology for his work on psychoanalysis. Sumner’s later work focused on religion and racism.

Courtesy of the Moorland-Spingarn Research Center, Howard University Archives; Archives of the History of American Psychology, The Center for the History of Psychology, The University of Akron

1-3g The Behaviourists

Beginning at the dawn of the 20th century, the concepts of “mental processes” and “brain function” in our definition of psychology took a back seat to observable behaviour for the better part of the next 50 years, because psychologists following the approach of behaviourism concentrated on observable, measurable behaviours. As part of their effort to measure behaviour carefully, many behaviourists restricted their research to studies using animals. Armed with Darwin’s evidence linking humans to animals, the behaviourists comfortably drew parallels between their observations of animals and their assumptions about human behaviour. In particular, behaviourists were fascinated by learning, which is examined in depth in Chapter 8.

Ivan Petrovich Pavlov (1849–1936) had a particularly significant impact on behaviourism and psychology. While studying digestion in dogs, he realized that the dogs’ salivation in response to the arrival of the handler or to being harnessed for an experiment, rather than just to the food itself, indicated that the dogs had associated, or linked, these signals with the arrival of food. The dogs’ ability to use this learned association to anticipate important future events was a remarkable advantage in terms of survival. This type of learning is now called classical or Pavlovian conditioning, which will be covered in detail in Chapter 8.

The Freudians and humanistic psychologists had conflicting views on human nature, with the Freudians believing that we are naturally selfish and aggressive and the humanistic psychologists believing that we are naturally good. These philosophical differences continue to colour our discussions of topics: Is a criminal a “bad” person who was never properly socialized or a “good” person who was corrupted?

Bill Fritsch/Getty Images

Psychology textbooks would spend little time on Pavlov if his research applied only to salivating dogs. Although classical conditioning occurs in rather primitive organisms, including fruit flies, snails, and slugs, it also occurs quite frequently in humans. Many of our emotional responses associated with environmental cues are the result of this type of learning. If you feel especially anxious prior to taking an exam, you can thank classical conditioning. If you are repulsed by the idea of eating a food that you once consumed just before becoming ill, this is again a likely result of classical conditioning.

While studying digestion, Ivan Petrovich Pavlov (1849–1936) realized that his dogs could learn that certain signals meant food was on the way.

Sovfoto/Getty Images; Mark Stivers

Classical conditioning helps us understand the links that we make between environmental cues and our emotions. If a soldier associated the smell of diesel fuel with traumatic experiences, smelling diesel fuel at a gas station back home could trigger distress.

NOOR KHAN/AP Images

John B. Watson (1878–1958) began experimenting with learning in rats and independently came to many of the same conclusions as Pavlov. Watson echoed the blank-slate approach of the British empiricist philosophers in his emphasis on the role of experience in forming human behaviour. Later in his career, Watson applied his understanding of behaviour to the budding American advertising industry. By 1930, he was earning $70 000 per year as an advertising executive—an astronomical salary for the time, much higher than the $3000 per year he earned as a professor. After discovering that blindfolded participants couldn’t tell the difference between brands of cigarettes, Watson concluded that to be successful, a product must be associated with an appealing image. The advertising industry was never the same, and today’s advertisers continue to apply Watson’s principles.

John B. Watson (left, 1878–1958) was a strong believer in the blank-slate approach of the earlier empiricist philosophers. After working as a psychology professor, he applied his knowledge of human behaviour to advertising with great success. Watson believed that a product would sell better if it were paired with an appealing image. His ideas are still used by advertisers today. For instance, a Justin Bieber–inspired collection of nail polish sold over 1 million bottles within the first two months of its launch.

Ferdinand Hamburger Archives, Sheridan Libraries, Johns Hopkins University; Photo by Philip Ramey/Corbis via Getty Images Photo by May Tse/South China Morning Post via Getty Images

Watson’s legacy in psychology was enormous. He restricted psychology to the study of observable behaviour. As will be established in Chapter 2 and throughout this text, even psychologists who are interested in internal events, like the visual recognition of an object, seek related observable behaviours, such as brain images or reaction time. Like Pavlov, Watson approached psychology with a focus on the relationships between environmental cues and behaviour.

Other behaviourists were more interested in the effects of consequences on behaviour, an idea that was derived from basic functionalism. Edward Thorndike (1874–1949) proposed the law of effect, which suggested that behaviours followed by pleasant or helpful outcomes would be more likely to occur in the future, whereas behaviours followed by unpleasant or harmful outcomes would be less likely to occur. He based his law on observations of cats’ behaviour in a puzzle box he had constructed (see Figure 1.5). To escape the box, a cat was required to complete a sequence of behaviours. Through trial-and-error learning, the cat would escape faster and faster on successive trials. In other words, the cat repeated effective behaviours and abandoned ineffective ones.

Figure 1.5 Thorndike’s Law of Effect Emerged from Observations of Cats. If you own a cat, you probably know that cats don’t like to be enclosed in boxes. Edward Thorndike (1874–1949) studied the escape strategies of a cat to build his law of effect.

© Cengage Learning
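The law of effect also maps naturally onto a small simulation. In the sketch below (our own Python illustration; the behaviours, strengths, and update sizes are invented for the example), the one behaviour that opens the box is strengthened after every escape and ineffective behaviours are weakened, so the number of attempts needed to escape falls across trials, mirroring the cat’s trial-and-error learning.

```python
import random

# Candidate behaviours and their initial response strengths.
strengths = {"paw at latch": 1.0, "meow": 1.0, "scratch walls": 1.0}
EFFECTIVE = "paw at latch"   # the only behaviour that opens the box

def choose(strengths):
    """Pick a behaviour with probability proportional to its strength."""
    actions = list(strengths)
    weights = [strengths[a] for a in actions]
    return random.choices(actions, weights=weights, k=1)[0]

for trial in range(1, 21):
    attempts = 0
    while True:
        attempts += 1
        action = choose(strengths)
        if action == EFFECTIVE:
            strengths[action] += 0.5   # satisfying outcome: strengthen
            break
        # Annoying outcome: weaken, but keep strengths positive.
        strengths[action] = max(0.1, strengths[action] - 0.1)
    if trial % 5 == 0:
        print(f"Trial {trial}: escaped after {attempts} attempts")
```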

Like Thorndike, B. F. Skinner (1904–1990) was interested in the effects of consequences on how frequently behaviours were performed. Skinner shared Watson’s belief that psychology did not benefit from consideration of consciousness or internal mental states (see Figure 1.6). He believed that inner, private states such as thinking and feeling existed, but he viewed them as behaviours that followed the same rules as public behaviours, like driving a car (Jensen & Burgess, 1997). He not only reduced his study of behaviour to the actions of rats and pigeons in adapted cages that came to be known as Skinner boxes, but he also was comfortable generalizing from the behaviour of rats and pigeons to complex human behaviours. Despite its strong focus on a limited set of animals and situations, Skinner’s behaviourism has provided a wealth of beneficial applications. Smokers attempting to quit, doctors and nurses engaging in self-paced continuing education courses, and children receiving treatment for autism spectrum disorder are all likely to be benefiting from Skinner’s efforts.

Figure 1.6 Behaviourism Set the Stage for Behavioural Neuroscience. Strict behaviourists often referred to a “black box” model, in which stimuli enter and responses exit, but you don’t need to know much about what the box does with the input. When psychologists began substituting what they learned about the brain for the inner workings of the black box, this led to the development of the biological psychology perspective, also known as behavioural neuroscience.

Eraxion/iStock/Getty Images

B. F. Skinner (1904–1990), shown here with the apparatus that bears his name—the Skinner box—was interested in the effects of reward and punishment on future behaviour.

Nina Leen/Getty Images

1-3h The Cognitive Revolution

By the 1950s, the behaviourists’ lack of interest in mental states and activity was challenged by scientists from diverse fields, including linguistics and computer science, leading to a cognitive revolution. Cognition covers the private and internal mental processes that the behaviourists avoided studying—information processing, thinking, reasoning, and problem solving. Ulric Neisser (1928–2012) gave the new field its name in his 1967 book, Cognitive Psychology.

Ulric Neisser (1928–2012) contributed the term cognition to the emerging field that studied information processing, thinking, reasoning, and problem solving.

Courtesy of Cornell University

Breakthroughs in computer technology allowed these new cognitive psychologists to use mathematical and computer models to illuminate the mental processes leading to observable behaviours. Allen Newell (1927–1992) and Herbert Simon (1916–2001) wrote groundbreaking artificial intelligence programs using human information processing as their model. Chapter 10 will explore the contributions of cognitive psychologists in more detail. Far from relying on the unreliable introspective methods of the structuralists, the cognitive psychologists developed rigorous and objective methods of quantifying internal cognitive processes, aided in part by the advancement of computer technology. By the 1980s, most university psychology departments were offering courses in cognition.

Computers were named after the job title of the women who did most computation tasks before the machines were invented, and who continued to operate them. Although these early computers were less powerful than your cellphone (not to mention more expensive), their operation gave psychologists new ideas about how the mind might process information.

NARA/Science Source

Cognitive Neuroscience

In 1934, Wilder Penfield (1891–1976) founded the Montreal Neurological Institute, and he served as its director until 1960. Penfield, a neurosurgeon, pioneered the surgical treatment of epilepsy and is responsible for creating the first detailed functional maps of the human brain. For example, Penfield’s surgical approach (which involved stimulating different areas of the brain while the patient was awake) enabled him to map the sensory and motor cortices, as we will see in Chapter 4. In 1994, Penfield was inducted into the Canadian Medical Hall of Fame.

While Penfield was a neurosurgeon, focused on identifying the functions of various parts of the brain, Donald Hebb (1904–1985) was interested in studying the psychological effects of Penfield’s surgical treatments. Born in Nova Scotia, Hebb attended university at Dalhousie before obtaining his master’s degree from McGill and his PhD from Harvard. In 1937, Hebb returned to Montreal and worked alongside Penfield to better understand the connection between the brain and behaviour. In 1949, he published The Organization of Behavior, a landmark book that laid out Hebb’s research and theory regarding the neural mechanisms behind learning and memory. Known as “Hebb’s rule,” his most important contribution is often summarized by the phrase “neurons that fire together, wire together.” Chapter 9 explores the neurobiology of memory in more depth.
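Hebb’s rule can be written as a one-line weight update: the connection between two neurons strengthens in proportion to the product of their activities. The sketch below is our own minimal Python illustration of that idea; the learning rate and the activity values are invented for the demonstration and are not drawn from The Organization of Behavior.

```python
# Minimal form of Hebb's rule: delta_w = rate * pre * post.
# The weight grows only on steps where both neurons are active together.
rate = 0.1      # learning rate, an arbitrary choice for the demo
weight = 0.0    # strength of the connection between the two neurons

# Each pair is (presynaptic activity, postsynaptic activity) on one step.
activity = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1), (0, 0)]

for pre, post in activity:
    weight += rate * pre * post   # "fire together, wire together"
    print(f"pre={pre} post={post} -> weight={weight:.1f}")
```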

Figure 1.7 Milestones in the History of Psychology.

INTERFOTO/Alamy Stock Photo; LOC/Science Source; Historic Collection/Alamy Stock Photo; Courtesy of Mount Allison University Archives; Courtesy of the Wellesley College Archives; Library of Congress Prints And Photographs Division [LC-USZ62-117329]; Clark University Archives; Courtesy of the Moorland-Spingarn Research Center, Howard University Archives; Farrell Grehan/Historical/Corbis; Nina Leen/Getty Images; McGill University Archives, PR041564; Eva Blue/Wikipedia; Photo by Boris Spremo/Toronto Star via Getty Images; H.S. Photos/Alamy Stock Photo

One of Hebb’s students was Brenda Milner (1918–), a prolific Canadian psychologist who has been described as the founder of neuropsychology. Milner began her lengthy career by working alongside Hebb and Penfield, and in September 2018 the Montreal Neurological Institute hosted a symposium honouring Milner (who turned 100 in June 2018) and celebrating her long list of accomplishments. Milner is best known for her research examining the contributions of the temporal lobes to memory processing, including her work with a patient known as H.M., which is covered in Chapter 9. The foundational discoveries of Milner set the stage for what is now the flourishing field of cognitive neuroscience.

Brenda Milner (1918–) is a pioneering Canadian researcher who has made many important discoveries regarding the brain and long-term memory.

Eva Blue/Wikipedia Summary 1.2 Pioneering Approaches to Psychology

Foundation of psychology
Things to remember
Wilhelm Wundt (1832–1920)

INTERFOTO/Alamy Stock Photo
Voluntarism: Conscious experience can be studied scientifically.
Edward Titchener (1867–1927)

Fotosearch/Stringer/Getty Images

1-4 What Are Psychological Perspectives? William James, the Freudians, and the behaviourists all tried to answer psychological questions with a comprehensive “big theory” approach. However, it is difficult to build a big theory without a large body of experimental data, and psychology was still a young science. To fill this gap, psychological scientists began to build a database by specializing in more specific points of view, or perspectives. By focusing on one part of the discipline, rather than trying to answer everything at once, psychologists began to gain an in-depth understanding of at least one aspect of mind at a time. By the second half of the 20th century, most psychologists were examining psychological phenomena from one of a handful of perspectives. The use of different perspectives does not imply disagreement or conflict. In most cases, each perspective demanded specialized expertise and methods, so different fields of psychology became characterized by distinct theories and techniques. For example, how a child learns a new vocabulary word would be investigated using different theories and methods by the biological, developmental, cognitive, social, or behavioural psychologist. Reflecting the traditional divisions of the field, it is common today for psychologists to refer to themselves as “social psychologists,” “developmental psychologists,” and so on, indicating their area of specialization and interest. University psychology departments often continue this organization, and students applying to graduate school in psychology might specialize in one particular area, much as undergraduates choose a major.

1-4a Five Perspectives of Psychology The need to consider major perspectives in psychology was reinforced in a report titled “Strengthening the Common Core of the Introductory Psychology Course,” published in 2014 by the American Psychological Association (American Psychological Association, 2014). We have already seen how the various perspectives might address the question of why some children don’t like broccoli. To further illustrate the distinctions among some of the main perspectives, we will consider how each might approach the question of human memory, discussed in detail in Chapter 9. Biological psychologists explore the relationships among mind, behaviour, and their underlying biological processes. They often use technology such as functional magnetic resonance imaging (fMRI). Scott Grafton of the University of California, Santa Barbara, is pointing out the features of the brain of one of the authors of this book.

Courtesy of Scott Grafton, UCSB Brain Imaging Lab. Photo © Roger Freberg Biological psychology, also called behavioural neuroscience, focuses on the relationships between mind and behaviour and their underlying biological processes, including genetics, biochemistry, anatomy, and physiology. In other words, biological psychologists are interested in the physical mechanisms associated with behaviour. In addition to the basic behavioural genetics presented in Chapter 3 and the biological psychology presented in Chapter 4, this perspective is emphasized in Chapter 5 (sensation and perception), Chapter 6 (consciousness), and Chapter 7 (motivation and emotion). As Chapter 4 will show, technological advances beginning in the 1970s, especially new methods for observing brain activity, initiated an explosion of knowledge about the connections between brain and behaviour. Using these new technologies, biological psychologists have approached the question of storage and retrieval of memories in many ways, ranging from observing changes in communication between nerve cells in slugs to investigating the effects of stress hormones on the ability to form memories. A branch of the biological perspective, evolutionary psychology, attempts to answer the question of how our physical structure and behaviour have been shaped by their contributions to our species’ survival. This perspective should sound familiar—it is a modern extension of James’s functionalism, discussed previously in this chapter. Earlier, we also saw evolutionary psychology at work in the shaping of our sensitivity to bitter tastes. The basic principle of evolutionary psychology is that our current behaviour exists in its present form because it provided some advantage in survival and reproduction to our ancestors. An evolutionary psychologist might be interested in our good memory for faces, and particularly for faces of people who have cheated us in the past (Barclay & Lalumière, 2006). In the world of the hunter–gatherer, being cheated out of a fair share of the hunt was likely to lead to starvation for a family, and people who could not keep track of the cheaters were unlikely to survive and reproduce. Evolutionary psychologists are interested in how our modern behaviours are shaped by our species’ history.


Publiphoto/Science Source; EPA/Newscom Cognitive psychology focuses on the process of thinking, or the processing of information. Because our ability to remember plays an integral part in the processing of information, a cognitive psychologist is likely to have a lot to say about the storage and retrieval of memories. A cognitive psychologist might ask why processing seems different when we are trying to remember names and dates while taking a history test compared to remembering how to ride a bicycle. What processes lead to the frustrating experience of having something on the “tip of your tongue,” in which you remember the first letter or a part of a word you’re trying to retrieve, but not the whole thing? What strategies can we use to make our memories more efficient? These and similar issues are addressed in Chapters 9 and 10. Cognitive psychologists investigate the ways that the human mind processes information. This cognitive psychologist is studying the use of mirrored images to help individuals overcome phantom pain due to the loss of a limb. Seeing images of what appears to be a healthy limb in place of the missing limb changes the way the mind thinks about the missing limb, leading to a reduction in perceived pain.

Pascal Goetgheluck/Science Source Developmental psychology explores the normal changes in behaviour that occur across the life span. Using the developmental perspective, a psychologist might look at how memory functions in people of different ages. Without further practice, 3-month-old babies can retain for about a month the memory that kicking moves a mobile suspended above their crib (Rovee-Collier, 1997). However, most adults have difficulty recalling events that occurred before the age of 3 or 4 years. Teens and young adults are able to remember names faster than older adults are (Bashore, Ridderinkhof, & van der Molen, 1997). These and other age-related changes are explored in Chapter 11. Social and personality psychology describes the effects of the social environment (including social and cultural diversity) and of individual differences on the behaviour of individuals (see Figure 1.8). Social and personality psychologists recognize that we construct our own realities and that the social environment influences our thoughts, feelings, and behaviour. Early psychologists were limited in their understanding of mind by their exclusive focus on their own sociocultural contexts. More recently, social psychologists have emphasized the need to explore the influences of sociocultural context and biology on our behaviour. Returning to our memory example, the social psychologist might ask how being in the presence of others influences the storage and retrieval of data. When we are sitting comfortably in our own homes, the answers to Jeopardy! questions come relatively easily. In front of millions of viewers, however, we might be lucky to remember our own names. Figure 1.8 Personality and Social Media. The emergence of social media has provided social and personality psychologists with new methods for examining the social mind and individual differences. What do your updates say about your personality? When the updates of more than 70 000 people who completed a Facebook personality test were examined, clear differences in their choices of words were observed. (Data from Park et al., 2015.)


© Cengage Learning Developmental psychologists look at the behaviour that is typical for people of certain ages, from infancy to old age. The amount of time this infant spends looking at moving stick figures helps us understand at what point in life we perceive biological motion.

Thierry Berrod, Mona Lisa Production/Science Source Although much of psychology explores how the average person thinks, feels, or acts, some people are not average. Behaviour can vary dramatically from one individual to another as a function of personality factors and many aspects of diversity, including age, gender and gender identity, sexual orientation, race, ethnicity, disability status, and socioeconomic status. Using our example of memory, we can see how individual differences in “need for cognition” can predict memory for verbal material (Cacioppo, Petty, Feinstein, & Jarvis, 1996). People who have a high need for cognition enjoy mental challenges, like solving difficult puzzles. As Chapter 13 will explore, individuals who are high in need for cognition also remember more of the messages to which they are exposed and respond differently to persuasive messages. Finally, the clinical psychology perspective seeks to explain, define, and treat psychological disorders, as explained in detail in Chapters 14 and 15. More recently, the clinical perspective has expanded to include the promotion of general well-being and health, which is described in Chapter 16. Many types of psychological disorders affect memory. Freud believed that traumatizing experiences were more difficult to remember, a process that he labelled repression (which will be discussed later, in Chapters 9 and 14). In other cases, war veterans and others who have experienced trauma might be troubled by memories that are too good, producing intrusive flashback memories of disturbing events. Social psychologists explore the effects of the social environment on our individual behaviour. In this example, the man in the middle is deciding whether to conform with the other two men in a simple judgment of line length.

© Joel Gordon 2001

1-4b A New Connectivity: Integrating Psychology’s Five Perspectives Although the 20th-century perspective approach to psychology generated detailed understanding of aspects of behaviour and mental processes, it has become apparent that single perspectives are insufficient for fully describing and explaining psychological phenomena. Armed with in-depth research results compiled from these various perspectives, many psychological scientists in the 21st century have returned to the more comprehensive view of the mind envisioned more than 100 years ago by James (Cacioppo, 2013). Their questions and methods are more likely to blur the lines of the perspectives outlined earlier, often with remarkable results. For example, a full understanding of romantic relationships is more likely to emerge from combinations of perspectives than through the use of single perspectives. “Zooming out” to combine an understanding of cultural and social contexts, biological factors (such as the “bonding” hormone oxytocin), personality (individual traits), social experience (such as self-fulfilling prophecies), cognitions (such as automatic thoughts), and the effects of psychological disorders (as in borderline personality disorder) gives us a more comprehensive view of the phenomenon (see Figure 1.9). Figure 1.9 Using Multiple Perspectives Can Help Us Better Understand Complex Phenomena Like Anxiety. What factors contribute to our feelings of anxiety? What accounts for the variability we see in individuals’ experiences of anxiety and their ability to cope? Single perspectives provide considerable insight, but combining perspectives gives us a richer understanding of human behaviour.

© iStockphoto.com/rbimages We don’t have a crystal ball that will allow us to foresee psychology’s future. However, we strongly believe that this future will involve combining and integrating new and existing perspectives. Many of these new ways of looking at the mind will take advantage of the revolution in techniques for studying the brain that began in the 1970s and continues. Already, today’s cognitive neuroscientists investigate the brain as an information-processing system and search for the biological basis of topics such as attention, decision making, and memory. Social neuroscientists investigate the biological factors that vary with people’s feelings and experiences of social inclusion, rejection, or loneliness. Behavioural neuroscientists pick up previous lines of research on learning, memory, motivation, and sleep, and search for connections between these processes and our biology. Clinical and counselling psychologists are likely to consider biological processes in their theories about the causes of psychological disorders. By merging the five perspectives of mind, we stand a better chance of tackling the remarkable problem of understanding the human mind (see Figure 1.10). Figure 1.10 Contemporary Psychology Integrates Five Perspectives. Viewing the mind by zooming in and using specialized perspectives has led to significant increases in our understanding, but 21st-century psychology is characterized by efforts to zoom out and integrate multiple, cross-cutting perspectives.

© Cengage Learning; Source: Data from Stamm et al. (2016). Clinical psychologists seek to understand and treat psychological disorders.

© iStockphoto.com/Alina555 Diverse Voices in Psychology Culture and Diversity as “Cross-Cutting Themes” in Psychology In addition to the five psychological perspectives reviewed in this chapter, a more general perspective provided by culture and diversity is also essential to our understanding of behaviour. The guidelines proposed for the introductory psychology course by Gurung et al. (2014) emphasize the need to use culture and diversity as a “cross-cutting theme.” In accordance with this view, each edition of this textbook has been crafted following the guidance provided by Trimble, Stevenson, and Worell (2003) about integrating issues of diversity into all relevant discussions in an organic way. In addition to incorporating diversity seamlessly throughout our topics, we also believe that it is useful to provide opportunities to highlight diversity issues in greater depth, which is the purpose of having this feature in each chapter. As suggested by Betancourt and López (1993, p. 636), “psychology as a discipline will benefit both from efforts to infuse culture in mainstream research and theory and from efforts to study culture and develop theory in cross-cultural and ethnic psychology.” This first chapter has reviewed psychology’s historical timeline and explored some of the career paths that psychologists can follow. From a diversity perspective, we see that the history of psychology features a dramatic underrepresentation of ethnic and racial minorities and a very short timeline. And this is not only a historical problem—a recent list of the “50 Most Influential Living Psychologists” was 78 percent male and 100 percent white (Son, 2018)! This lack of diversity was quickly noticed and responded to on social media, with psychologists nominating additional individuals they believed should be included on an expanded list (e.g., the aforementioned Brenda Milner, who had not made it onto the original list). In Canada, the profession of psychology also suffers from an extreme lack of Indigenous representation. A report by the Canadian Psychological Association and the Psychology Foundation of Canada (2018) indicates that the number of Indigenous psychologists practising or teaching in Canada is likely fewer than 12. Given that Indigenous Peoples (including the First Nations, Métis, and Inuit) make up approximately 5 percent of the total population of Canada, this is obviously a significant issue. Reasons for the underrepresentation of Indigenous people within the psychological community are myriad and complex. Barriers to higher-education psychology programs for Indigenous students include not only financial constraints and limited access to bursaries and scholarships, but also much deeper issues relating to the current disconnect between Western forms of scholarship and traditional Indigenous ways of knowing. The Canadian Psychological Association and Psychology Foundation of Canada task force response to the Truth and Reconciliation report (2018) also indicates that while the discipline needs more Indigenous psychologists and clinicians appropriately trained in traditional ways of knowing, Indigenous students seeking education should not be required to leave their communities and culture. Historically, the profession of psychology has failed Indigenous Peoples in profound ways, including but not limited to the role that psychologists played in supporting federal policies such as the residential school system and forced adoption initiatives.
Throughout the text, additional examples of ways in which psychology has disrespected or harmed Indigenous Peoples will be highlighted, such as the discussion of genetic research in Chapter 3. In particular, Chapters 14 and 15 will address the many ongoing issues relating to the assessment and treatment of mental health issues among members of Indigenous communities. Indigenous worldviews include the holistic framework where the whole person (physical, emotional, spiritual, and intellectual) is seen as being interconnected to land and in relationship to others (family, communities, nations).

Reprinted by permission of Dr. Michelle Pidgeon. Thinking Scientifically Can the Use of a Single Perspective Be Misleading? We have argued that restricting our thinking about an aspect of mind to the information provided by one perspective can result in an incomplete picture, but can this single-perspective approach actually lead us in the wrong direction? The answer to that question is a resounding “yes.” Consider the following example. For many years, researchers were puzzled in their efforts to understand the relationship between child maltreatment and later antisocial behaviour. Although maltreatment often seems to be linked to later criminal behaviour, the majority of maltreated children do not become delinquents or adult criminals (Caspi et al., 2002). To solve this dilemma, a clinical psychologist might focus on environmental factors, such as the presence of a trusted adult or a delinquent peer group, or personal factors, like resilience. Working in parallel, biological psychologists know something about certain candidate genes and their relationships with aggressive behaviour in animals. In particular, animals with a low-activity version of the MAOA gene seemed to be more aggressive than animals with a higher-activity version. However, links between variations in MAOA and human aggression are not clear. Separately, neither group of psychologists is likely to do a very good job of explaining why some children exposed to maltreatment engage in antisocial behaviour while others do not. The solution, however, becomes apparent when we combine the clinical psychologists’ observations of environmental factors with the genetic information provided by the biological psychologists. It appears that a gene-environment interaction takes place, in which children with the low-activity version of the MAOA gene responded to maltreatment by becoming antisocial, while children with the higher-activity version did not (Caspi et al., 2002; Fergusson, Boden, Horwood, Miller, & Kennedy, 2011). Restricting ourselves to one perspective might cloud our understanding, but combining perspectives leads us to an accurate conclusion. Experiencing childhood maltreatment does not reliably predict aggressiveness in youth, but combining genetics and exposure to maltreatment provides a clearer picture.
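To make the logic of a gene-environment interaction concrete, here is a minimal Python sketch with invented rates (illustrative numbers only, not data from Caspi et al., 2002). Neither factor alone moves the rate of antisocial behaviour much; the combination moves it a lot, which is the signature of an interaction.

```python
# Hypothetical rates of antisocial behaviour in a 2 x 2 design:
# MAOA activity (high/low) crossed with childhood maltreatment (yes/no).
rates = {
    ("high", False): 0.10,  # baseline
    ("high", True):  0.14,  # maltreatment alone: small change
    ("low",  False): 0.12,  # low-activity gene alone: small change
    ("low",  True):  0.45,  # both together: large change (the interaction)
}

for (gene, maltreated), rate in rates.items():
    status = "maltreated" if maltreated else "not maltreated"
    print(f"{gene}-activity MAOA, {status}: {rate:.0%} antisocial")
```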

DNA Strand: Science Picture Co/Superstock; Teenager: Suzanne Tucker/ Shutterstock.com

1-5 What Does It Mean to Be a Psychologist? Psychology is unlike some other disciplines in which people with a bachelor’s degree can refer to themselves as practising members of the relevant profession, such as chemists or biologists. Calling oneself a psychologist is restricted to holders of graduate (usually doctorate) degrees. It is not uncommon for students pursuing (or thinking about pursuing) a bachelor’s degree in psychology to wonder “What can I do with this degree?” But students who graduate from psychology programs end up pursuing careers in all sorts of different fields, some of which require further study and some of which do not. Students who study psychology in university learn how to apply psychological knowledge to solve problems, think critically, write effectively, act ethically, collaborate with others, and organize and interpret empirical data. These types of skills are highly valued by employers (The Premier’s Highly Skilled Workforce Panel, 2016). Some people with undergraduate degrees in psychology prefer employment in fields that are directly related to psychology, such as working in research facilities or rehabilitation centres for drug abuse or brain damage. Others are quite successful in a variety of people-oriented jobs, such as those found in management, sales, customer service, public affairs, education, human resources, probation, and journalism. This diversity of career pathways reflects psychology’s position as a hub science connected to many other fields, as described earlier in this chapter. Jon Stewart, host of The Daily Show on Comedy Central from 1999 to 2015, has a bachelor’s degree in psychology from the College of William and Mary.

AF Archive/Alamy Stock Photo Many students who earn a bachelor’s degree in psychology decide to pursue further studies, enrolling in professional or graduate school programs. Some students choose to enter programs that are directly related to psychology, for example, pursuing a graduate degree in clinical or experimental psychology. Others choose to pursue degrees in related fields, including nursing, medicine, occupational therapy, criminology, law, marketing, and so on. A degree in psychology is excellent preparation for a wide range of professional pursuits. More than half of people holding graduate degrees in psychology work in health care, counselling, financial services, or legal services professions (Stamm, Lin, & Christidis, 2016). Graduates with a master’s degree in psychology, which usually requires one to two years of additional study past the bachelor’s degree, can obtain licensing as therapists in most provinces and territories. As discussed further in Chapter 15, each province and territory in Canada has its own licensing requirements for those wishing to practise psychology. Only practising therapists with a doctorate (PhD) in psychology can refer to themselves as “doctors of psychology.” Although a master’s degree has traditionally been sufficient for licensing, the trend is toward requiring a doctoral degree for certain positions. For example, school psychologists participate in academic and career counselling, as well as the identification and remediation of problems that interfere with student success. They are typically employed by publicly funded school districts or boards. Forensic psychologists attempt to understand the criminal mind and to develop effective treatments for criminal behaviour.

© iStockphoto.com/BirdofPrey Many people working in psychology have earned doctoral degrees, which typically require five to six years of study beyond the bachelor’s level. More women than men earn PhDs in psychology, although this overall trend is not equivalent across all subfields (e.g., the relative proportion of women earning doctorates in developmental psychology is much higher than the proportion earning doctorates in behavioural neuroscience). As shown in Figure 1.11, about 34 percent of new doctoral-level psychologists do what your professor and the authors of this textbook do: teach and/or conduct research in higher-education settings. About 45 percent of new doctoral-level psychologists work as therapists. Smaller numbers find employment in business and government settings, elementary and secondary schools, and other related fields. These statistics are based on data collected in the United States, as comparable Canadian statistics are unfortunately unavailable; however, it is reasonable to assume that the patterns are relatively similar across both countries. Figure 1.11 Individuals Earning Psychology Degrees Work in a Variety of Settings. Most psychologists with graduate degrees are employed in clinical and higher-education settings, but opportunities for students of psychology also exist in schools, businesses, government, and other places in which an understanding of human behaviour is helpful.

© Cengage Learning Source: Data from Stamm et al. (2016). John Dunn is a sports psychologist from the University of Alberta. He serves as a mental support coach to many elite Canadian athletes, including the Canadian Olympic curling team, and has been a member of Canadian support staff at numerous Winter Olympic games, including the 2018 games in PyeongChang.

THE CANADIAN PRESS/Adrian Wyld Psychologists entering doctoral programs traditionally identify with one of the major perspectives discussed earlier, such as social, cognitive, or biological. Choosing a graduate perspective is similar to choosing an undergraduate major. Although all psychology graduate students might take core courses in research methods and statistics, they typically pursue coursework and research in their particular area of specialization. However, the training of psychologists in the 21st century is beginning to reflect the connections occurring in the field. Increasingly, students are being trained in combined specialties (e.g., social cognitive neuroscience) as psychology becomes an increasingly integrated field of study. The most rigid distinction occurs between graduate students who plan to specialize in clinical or counselling psychology and those who do not. The clinical or counselling track includes extensive internships and supervised training prior to government-regulated licensure, which usually add at least one year to students’ graduate studies. Do not assume that your psychology professors are all therapists; most likely they are not. It is important to distinguish between therapists with doctoral degrees in psychology (PhDs or PsyDs) and psychiatrists, who are medical doctors (MDs). The biggest difference between the two professions is that psychiatrists can prescribe medication, but psychologists cannot. In Canada, provincial and territorial health care plans will typically cover the cost of a psychiatrist, but not a psychologist. In Chapter 15, we will provide more detail about the types of therapists who treat adjustment problems and psychological disorders. Psychology Takes on Real-World Problems Using Psychology to Help Solve Real-World Problems As you have seen in this chapter, psychological science is the study of behaviour, mental processes, and brain functions. As such, psychology has much to say about the causes and solutions of contemporary human problems. To illustrate psychology’s power to contribute to the understanding and solution of these problems, each chapter will focus on a “big problem” from the point of view of the chapter content. While each chapter addresses only a part of the problem, by the time you reach the end of the textbook, you will have been exposed to many ways that psychology can contribute to solving some of the biggest challenges that human beings face today, such as pollution, climate change, education, poverty, terrorism, pandemics, food insecurity, crime, and social injustice.

Alf Ribeiro/ Shutterstock.com Summary 1.3 Five Psychological Perspectives

Perspective
Things to remember
   

Courtesy of Scott Grafton, UCSB Brain Imaging Lab. Photo © Roger Freberg; Publiphoto/Science Source Biological and evolutionary psychology Investigates the connections among mind, behaviour, and biological processes, and asks how our evolutionary past continues to shape our behaviour

Pascal Goetgheluck/Science Source Cognitive psychology Focuses on the process of thinking, or the processing of information

Chapter Summary Chapter Summary Although humans have long asked questions about the mind, it was not until the late 1800s that researchers started examining the basic processes of the mind and behaviour from a scientific perspective. Over the past 150 years, psychology has undergone many shifts regarding the types of questions that should be asked and the experimental approaches that should be taken to answer them. What has emerged is a broad discipline that attempts to understand the mind, brain, and behaviour from multiple levels of analysis. In part because of this multilevel approach to understanding, psychological research has interconnections with a wide range of other disciplines, giving it the distinction of being a “hub” science. Importantly, the discipline of psychology is now in a position of recognizing the inherent limits of understanding any complex human behaviour from a single perspective. More recent trends involve the integration of what were once considered separate domains of psychology, as contemporary psychologists understand that a comprehensive understanding of any human behaviour requires the integration of knowledge from each of the five perspectives outlined in this chapter. When you read about the different psychological disorders covered in Chapter 14, you will be able to see this integrative approach in action.

ch 1 Key terms Key Terms The Language of Psychological Science Be sure that you can define these terms and use them correctly.

  • behaviourism
  • biological psychology
  • clinical psychology
  • cognitive neuroscience
  • cognitive psychology
  • cultural diversity
  • developmental psychology
  • evolutionary psychology
  • functionalism
  • Gestalt psychology
  • humanistic psychology
  • introspection
  • mind
  • natural sciences
  • personality
  • philosophy
  • psychology
  • social psychology
  • structuralism
  • voluntarism

Chapter 2 intro Chapter Introduction Scientific methods, including brain imaging, have allowed researchers to pinpoint structural and functional differences between the brains of fluent speakers and those of people who stutter.


Argosy Publishing, Inc. Learning Objectives

  1. Distinguish scientific reasoning from common sense.
  2. Assess the use of case studies, naturalistic observations, surveys, focus groups, and interviews to describe behaviour.
  3. Analyze the key features, strengths, and limitations of correlational and experimental methods.
  4. Distinguish between reliability and validity.
  5. Differentiate descriptive and inferential statistics.
  6. Critique the ethical guidelines for using human and animal participants in research.

Among the many celebrities who stutter is British actress Emily Blunt. Although she is famous for (among other things) playing Mary Poppins, the nanny who is “positively perfect in every way,” Blunt has revealed that she was bullied for her stutter as a child and would do impressions because she was less likely to stutter if she was speaking as someone else. A teacher encouraged Blunt to try out acting, which she says helped her gain more confidence with her speech. Blunt now works for organizations that try to help people who experience problems with stuttering. Stuttering involves disruptions in normal speech production, such as repeating the starting letter of a word (b-b-b-bird), holding a vowel sound for a long time (ah ah ah), or having difficulty initiating speech. About 5 percent of the population experience stuttering (Månsson, 2000), with males two to five times more likely to stutter than females (Craig & Tran, 2005). Stuttering has been described since the days of the ancient Greeks and occurs across all cultures and ethnicities. Many interesting myths exist regarding its causes and remedies (Kuster, 2005). South African traditions suggest that stuttering results from leaving a baby out in the rain or tickling the infant too much. A Chinese folk remedy for stuttering was to hit the person in the face when the weather was cloudy. In Iceland, people believed that a pregnant woman who drank from a cracked cup was likely to produce a child who stuttered.

PictureLux/The Hollywood Archives/Alamy Stock Photo Many of us might ask, “How could anybody believe these things?” But how do we know these are merely myths and not facts? Instead of dismissing these efforts to explain and predict, think about what might have happened to lead people to these particular conclusions. What is missing from these conclusions is a system for reaching logical, objective results. Science provides us with this system. What does science have to say about stuttering? Based on the careful evaluation of stuttering using the methods outlined in this chapter, scientists have concluded that there are multiple causes for stuttering (Kang et al., 2010). Many cases seem to have a basis in genetics, which is discussed in Chapter 3 (Raza et al., 2015). Scientists have used brain-imaging technologies to zoom in on brain structures and functions that appear to differ between stutterers and fluent speakers, with stutterers showing more activation of the right hemisphere during speech (Gordon, 2002). As mentioned in the previous chapter, some of the most thorough explanations combine multiple psychological perspectives (Ward, 2013). A complete explanation of stuttering zooms back out to combine a predisposition for the problem resulting from genetics and biology with developmental, emotional, and social factors, like feeling embarrassed or anxious about speaking in front of peers. Although there is no “cure” for stuttering, carefully tested scientific explanations combining input from various perspectives are leading to more effective treatments. In this chapter, you will learn how science provides a system that allows us to construct increasingly realistic models of the world around us. Science has provided explanations for many natural phenomena, like lightning, that were probably quite frightening for our ancestors.

steineranden/ Shutterstock.com

2-1 What Is Science? Throughout human history, we have been motivated to understand, predict, and control the world around us. To meet these goals, we need methods for gaining knowledge. We often take contemporary scientific knowledge for granted, but our ancestors did not enjoy the benefits of science while trying to explain and predict their world. Early in history, people attempted to understand natural phenomena by applying human characteristics to nature (Cornford, 1957). Skies could look angry or a lake could be calm. Other explanations involved spirits inhabiting humans and all other objects. Earthquakes and illness were viewed as the actions of spirits, and people attempted to influence these spirits through magical rituals. Later on, people looked to authorities, such as religious leaders and philosophers, for explanations of natural phenomena. People often form strong beliefs about their world based on faith, which literally means “trust.” Faith is belief that does not depend on logical proof or evidence. We might accept friends’ excuses for being late based on our faith in their honesty, without knowing for certain whether they are telling the truth. In contrast to faith, science requires proof and evidence. The word science comes from the Latin scientia, which means “knowledge.” Science doesn’t refer to just any type of knowledge, but rather to a special way of learning about reality through systematic observation and experimentation. The methods described in this chapter are designed to supply that evidence. Throughout history, people have often turned to authorities, such as religious leaders, instead of to science. The astronomer Galileo Galilei was interrogated as part of the Roman Inquisition for believing that the Earth was not the centre of the universe.

Stefano Bianchetti/Corbis Historical/Getty Images

2-1a The Scientific Mindset Not all observations are scientific. How does science differ from everyday observations, like the belief that “opposites attract”? As you will learn in Chapter 13, “opposites” do not, in fact, find each other very attractive. First, science relies on objectivity, rather than subjectivity. Objectivity means that conclusions are based on facts, without influence from personal emotions or biases. In contrast, subjectivity means that conclusions reflect personal points of view. For example, take a moment right now to bring to mind an image of the mascot from the game Monopoly (whose official name is Rich Uncle Pennybags). Does he have a moustache? Is he wearing a top hat? Does he have a monocle? If you’re like most people, you probably answered “yes” to all three questions. However, the correct answer to the last question is “no”—the Monopoly man does not wear (and has never worn) a monocle. This type of collective misremembering has been dubbed the “Mandela effect.” But why does it occur? Why would so many people believe that the Monopoly man has a monocle? Simply put, having a monocle fits in with our stereotypes of what a “rich man” would look like (and Mr. Peanut, for the record, does wear a monocle). The point is that our memories, and the conclusions that we draw from such memories, are prone to error and bias. Even when we make direct observations that do not rely on memory, bias can sneak in and colour our judgments. Many people falsely believe that Rich Uncle Pennybags, pictured here, wears a monocle.

Lynne Sutherland/Alamy Stock Photo Scientists strive to be objective, but any observation by a human is, by definition, subjective. Recognizing when we are being subjective can be difficult, so scientists cannot rely on their introspections to maintain objectivity. Most of us have had the experience of witnessing an accident in the presence of other people. It can be astonishing to hear the different accounts of what happened. Didn’t we all see the same thing? Individuals like to believe that their own view of the events is the accurate one. As discussed in Chapters 9 and 10, objective facts can be altered easily when processed subjectively by individuals. The scientific methods described in this chapter promote objectivity and help prevent biased, subjective observations from distorting a scientist’s work. The second important difference between science and everyday observations is the use of systematic as opposed to hit-or-miss observation. By “hit or miss,” we mean making conclusions based only on whatever is happening around us. If we want to make conclusions about the human mind, we cannot restrict our observations to our immediate circle of acquaintances, friends, and loved ones. Our observations of the people we see frequently are probably quite valid. It’s just that the people we know represent a small slice of the greater population. For example, we might be surprised to learn that our favourite candidate lost an election because “everyone we know” voted for that candidate. Based on hit-or-miss observations, high school students in Newfoundland might believe that e-cigarette use is extremely common among all Canadian youth. And this would make sense, given that 32 percent of Newfoundland students in Grades 10–12 report using an e-cigarette in the past 30 days (Health Canada, 2017). However, e-cigarette use varies widely by region (see Figure 2.1), as well as by age and gender. For example, less than 10 percent of Grade 10–12 students in Ontario reported having used an e-cigarette in the past 30 days, and this number is even lower if you ask younger students or only females. In order to get an accurate picture of the world, we need to move beyond our own biased and limited observations and rely on methods such as those described in this chapter. Figure 2.1 Scientific Observations Are Systematic, not Hit or Miss. Science provides ways to make systematic observations. Judgments that we make based on the people we know might not apply to larger groups of people. Based only on personal experience, high school students living in Newfoundland and Ontario might disagree about the prevalence of e-cigarette use.
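A toy simulation can illustrate why hit-or-miss observation misleads. In the Python sketch below, the regional rates are hypothetical values loosely echoing the figures cited above, and the procedure is deliberately simplified (a real survey would weight regions by their populations).

```python
import random

random.seed(1)  # make the illustration reproducible

# Hypothetical "true" past-30-day e-cigarette rates by region.
regions = {"Newfoundland": 0.32, "Ontario": 0.09}

def surveyed_rate(true_rate, n):
    """Simulate asking n randomly chosen students in one region."""
    users = sum(random.random() < true_rate for _ in range(n))
    return users / n

# Hit-or-miss: only asking the students around you, in one region.
print("Asking 50 Newfoundland students:", surveyed_rate(0.32, 50))

# Systematic: sampling every region (equal weights, for simplicity).
estimate = sum(surveyed_rate(r, 500) for r in regions.values()) / len(regions)
print("Sampling across both regions:", round(estimate, 2))
```

The single-region estimate lands near 32 percent, while the cross-region estimate lands near the average of the two regions, showing how the people immediately around us can badly misrepresent the larger population.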

Source: Adapted from Health Canada (2017). Detailed tables for the Canadian Student Tobacco, Alcohol and Drugs Survey 2016-17 (Table 6), https://www.canada.ca/en/health-canada/services/canadian-student-tobacco-alcohol-drugs-survey/2016-2017-supplementary-tables.html#t6. Finally, science relies on observable, objective, repeatable evidence, whereas everyday observation often ignores evidence, especially when it runs counter to strongly held beliefs and expectations. In a classic study, Hastorf and Cantril (1954) asked Princeton and Dartmouth students to report on their memories of the last football game of the season, a brutal game that resulted in a broken nose for Princeton’s star player and a broken leg for a Dartmouth player. Although they had all witnessed the same game, students from the two schools had very different recollections of what had happened. Even after watching the game again on film, the students’ preconceived biases about who was justified in their behaviour (their own team) and who was unsportsmanlike (the other team) held firm. After a particularly rough football game between the 1951 Dartmouth Indians and Princeton Tigers, social psychologists Albert Hastorf and Hadley Cantril examined student perceptions of the event. They found that Dartmouth and Princeton students formed very different interpretations of this same “objective” event—for example, the Princeton students “saw” the Dartmouth team make over twice as many rule infractions as were seen by the Dartmouth students. People see what they want to see.

Bettmann Archives/Getty Images Many people are convinced that women talk more than men. If you hold this belief, you are likely to notice and remember instances that support your belief more than instances that contradict it. This difference in attention and memory is termed confirmation bias, and it represents one reason why objective and systematic observation is so important in scientific inquiries. Careful scientific studies have called into question the belief that women talk more than men. One group of researchers recorded students’ talking throughout the day and concluded that “the widespread and highly publicized stereotype about female talkativeness is unfounded” (Mehl, Vazire, Ramírez-Esparza, Slatcher, & Pennebaker, 2007, p. 82). Other studies make the argument that in most circumstances, men may actually talk more than women (James & Drakich, 1993; Leaper & Ayres, 2007). Scientific knowledge is both stable and changing. It is a work in progress, not a finished product. The fact that we may learn something new tomorrow should not make you assume that today’s knowledge is flawed. Most change occurs slowly, on the cutting edges of science, rather than quickly or at the main core of its knowledge base. Unlike in many other fields, we expect science to improve over time. An important feature of scientific literacy is learning to be comfortable with the idea that scientific knowledge is always open to improvement and will never be considered certain (AAAS, 2009). In part because there are many things that we can never know for certain, being a good scientist (and a good critical thinker) involves approaching things with a healthy dose of skepticism. This means that we want to avoid accepting things at face value, and instead think critically about the claims and information we are presented with in books, articles, and newspapers. Scientific research results do not support the common stereotype that women talk more than men.

Chris Schmidt/ iStockphoto.com

2-1b The Importance of Critical Thinking Critical thinking, or the ability to think clearly, rationally, and independently, is one of the foundations of scientific reasoning. The skilled critical thinker can follow logical arguments, identify mistakes in reasoning, prioritize ideas according to their importance, and apply logic to personal attitudes, beliefs, and values. Failures of critical thinking contribute to bad decisions, gullible voters, patient deaths, financial mismanagement, academic failure, and many other undesirable outcomes (Facione, 2013). The development of critical thinking skills and habits is often cited as one of the most important goals of an undergraduate education. As a student, there are numerous reasons why you should be interested in developing these skills. Much of the content included in this book (and other textbooks you read) will be updated and modified as time goes on and we learn more about the mind and behaviour. The specific knowledge that you learn in university will not always be relevant to your future self. However, being able to think critically about this knowledge is a transferable skill that will serve you well no matter what the future holds. Critically thinking about psychology is particularly important because psychology is so relevant to daily life (e.g., coping with stress, improving relationships, learning effectively). Critical thinking is not built in; rather, it involves the development of habits, skills, and mindsets that can be continually improved with practice. Importantly, just because someone is “smart” is no guarantee that they will engage in good thinking practices. As will be discussed in Chapter 10, even very intelligent people can make irrational decisions or reach faulty conclusions. But where to start? You can begin by using five critical thinking questions to evaluate new information you come across in your everyday life, starting with what you read in this textbook (Bernstein, 2011):

  • What am I being asked to believe or accept?
  • What evidence supports this position?
  • Are there other ways that this evidence could be interpreted?
  • What other evidence would I need to evaluate these alternatives?
  • What are the most reasonable conclusions?

It is also helpful to recognize the signs that you are not thinking critically (excerpt taken from Lau, 2016):
  • I prefer being given the correct answers rather than figuring them out myself.
  • I don’t like to think a lot about my decisions, as I rely only on gut feelings.
  • I don’t usually review the mistakes I have made.
  • I don’t like to be criticized.

It is very easy to slip into biased or faulty reasoning without even noticing it. Imagine you’re on your third cup of coffee of the day, and you casually type into Google “is coffee bad for you?” On the surface, this seems like a legitimate, unbiased strategy—after all, you didn’t type “reasons coffee is good for you” into the search bar, which would be a rather obvious case of confirmation bias. However, the search undoubtedly reveals a confusing array of websites and headlines, some suggesting that coffee is good for you and some suggesting that coffee is bad. Which result do you click on? Your inclination is likely going to be to click on the headline that you want to be true. So, you click on the results indicating that coffee has many health benefits and ignore the ones suggesting that it can lead to health problems. And there it is—confirmation bias in action. Critical thinking is not only essential to good science, but also provides the underpinning of a free society. Our ability to think clearly, rationally, and independently gives us the confidence to question the actions of people in authority instead of engaging in blind obedience. We hope you will continue to practise good critical thinking skills long after you finish reading this textbook.

2-1c The Scientific Enterprise Learning scientific facts is not the same as understanding how science works. Science, including psychological science, is more than a collection of facts—it is a process (see Figure 2.2). Figure 2.2 Science as a Never-Ending Process. While the scientific method is often modelled as a linear process, we have chosen to portray its steps as part of an infinite loop, in order to emphasize both the cyclical and never-ending nature of the scientific process. The loop starts out wide, narrows, and then expands again. The steps of scientific reasoning follow the same pattern. Beginning broadly with an examination of a phenomenon or research question, scientists then narrow their thinking to generate specific hypotheses and methods. After obtaining the results, they finally consider the broader implications of a study. From these implications, new questions are generated, and the process starts all over again. Science is never-ending, since we can never achieve complete certainty in our understanding of how the universe (or the human mind) functions.

Scientific Theories Science seeks to develop theories, which are sets of facts and relationships between facts that can be used to explain and predict phenomena (Cacioppo, Semin, & Berntson, 2004). In other words, scientists construct the best possible models of reality based on the facts known to date. Unfortunately, the English language can be the source of considerable confusion regarding the nature of scientific theories. In addition to its use in science, the word theory can be used in nonscientific ways to describe a guess, such as “I have a theory about why my professor seems unusually cheerful this morning,” or a hypothetical situation, as in “That’s the theory, but it may not work in practice.” Confusion over the multiple meanings of the word theory has led people mistakenly to view truly scientific theories, like the theory of evolution, as nothing more than casual guesses or hunches, rather than the thoroughly investigated and massively supported principles that they are. The best scientific theories not only explain and organize known facts, but also generate new predictions (see Figure 2.3). The word prediction comes from the Latin words for “saying before.” A scientific prediction is more than a guess or hunch. It is usually stated in a rigorous, mathematical form that allows the scientist to say that under a certain set of circumstances, a certain set of outcomes is likely to occur (if A, then B). In some cases, a theory’s predictions can be surprising. For example, you might believe that it’s impossible to be happy and sad at the same time. However, one model of emotion, discussed in Chapter 7, predicted that it is quite possible to feel happy and sad at the same time (Cacioppo, Berntson, Norris, & Gollan, 2012). This prediction was confirmed by research showing that first-year university students reported feeling either happy or sad, but not both, on a normal day of school, yet reported experiencing both emotions simultaneously on the day they moved out of campus housing to go home for the summer. Figure 2.3 How to Develop and Test a Theory. Theory building begins with generating hypotheses that are then systematically tested. Hypotheses that are not rejected contribute to the theory and help generate new hypotheses.

© Cengage Learning Before attempting to generate your own scientific questions, it pays to become familiar with relevant theories and previous discoveries. As Sir Isaac Newton noted, scholars stand on the shoulders of giants (Turnbull, 1959)—we build on the work of those who came before us. New lines of research can also originate in observation. Scientists are observers not just in the laboratory, but also in everyday life. Scientific progress often takes a giant leap forward when a gifted observer recognizes a deeper meaningfulness in an everyday occurrence, such as when Newton observed a falling apple and considered its implications for a law of gravity. As we discovered in Chapter 1, Ivan Petrovich Pavlov realized that when his dogs learned to salivate to signals predicting the arrival of food, something more significant than slobbering dogs was happening. The learning that he observed explains why we get butterflies in our stomach before a performance and avoid foods that we think made us ill. Experiencing Psychology Using Critical Thinking to Evaluate Popular Press Reports Popular press reports often state that “experts agree that” some fact or another is true. It is very important to use your very best critical thinking in evaluating these statements. The humour website Cracked.com often publishes “lists” from their contributors. One such list was called “28 Underrated Ways Life Is Different for Men and Women” (Cracked Readers, 2016). This list was the result of a contest in which the participants needed to submit at least one scientific article to back up their claim. This sounds like a good opportunity to apply the critical thinking skills that you’ve learned in this chapter. We’d like you to walk through this process, and then you’ll have a chance to try one of the other “facts” from this site or another of your choice. Coming in at #4 on the Cracked.com list is the statement, accompanied by an image of eye-tracking results, that “Men stare at men’s crotches a lot more than women do!”

  1. What Am I Being Asked to Believe or Accept? Our answer: We are being asked to accept the fact that men stare at men’s crotches “a lot more” than women do. Note that “a lot more” isn’t very specific. This could mean that more men than women stare at men’s crotches, or that men spend more time staring, or something completely different. The best scientific statements are very specific.
  2. What Evidence Supports This Position? Our answer: Here, things become very complicated. Cracked.com does not provide the source that their contributor used, so finding the original piece required some fancy googling. The result traces back to an in-house eye-tracking study conducted in 2005 by the Nielsen/Norman Group, experts in user experience research. So far as we can tell, the study was never published in a peer-reviewed scientific journal (a huge weakness), but it was written up by a number of blogs. The best report was a description in the USC Annenberg’s Online Journalism Review (Ruel, 2007). Ruel’s report noted that the 255 participants between the ages of 18 and 64, 58 percent female and 42 percent male, looked for different lengths of time at parts of a photo of baseball player George Brett while undergoing eye-tracking analysis. In addition, when asked to browse the American Kennel Club website, the Nielsen/Norman researchers found that men fixated longer on the dogs’ genitalia than the women did. A popular press website, Cracked.com, suggested that eye-tracking data was “proof” that “men stare at men’s crotches a lot more than women do.” Evaluating claims like this requires our best critical thinking skills.

Icon Sportswire/Icon Sportswire/Getty Images
  3. Are There Other Ways That This Evidence Could Be Interpreted? Our answer: Eye-tracking data, handled well, can be an excellent research tool. However, we don’t know much about how fixation length was calculated by Nielsen/Norman. We don’t know if the “heat map” image in Cracked is from a single participant or represents an average of viewer responses. Although the image seems clear (women or a single female participant fixated only on the head and shoulders area), psychologists would usually want a quantitative measure of a supposed difference between two groups. Inferential statistics are needed before we can make conclusions about the behaviour of populations of men and women.
  4. What Other Evidence Do We Need? Our answer: One of the key points discussed in this chapter is the concept of “generalization.” When we say “Men do something more than women,” we are making a very general statement about gender that crosses other variables like age, race or ethnicity, nationality, education, and so on. We would want to ensure that the Nielsen/Norman sample was truly representative before making such a claim. One of the biggest weaknesses in the interpretation of these data is the use of two types of stimuli—a photo of George Brett and photos of dogs. A much wider selection of stimuli would help us understand whether a general principle was involved or whether people were just reacting to these particular stimuli.
  5. What Are the Most Reasonable Conclusions? Our answer: While the result presented by Cracked.com is entertaining, which is the purpose of the website, it does not represent good science. We would want to see quite a bit more detail about the methods and analyses, along with publication in peer-reviewed sources, before we accept the conclusions as valid.

Now It’s Your Turn Using our model critical thinking questions, explore one of the other Cracked.com list items or a popular press headline about psychology of your choice, and evaluate the item. Do you think you might have evaluated the claim differently before reading this chapter? Why or why not? A contemporary theory of emotion correctly predicted the circumstances under which we might experience mixed emotions of happiness and sadness. Graduation from university is an important accomplishment, but we might feel sad about leaving our friends.

Syracuse Newspapers/Dick Blume/The Image Works Generating Good Hypotheses Once you understand the theoretical foundations of your area of interest, you are ready to generate a hypothesis. A hypothesis is a type of inference, or an educated guess, based on prior evidence and logical possibilities (see Figure 2.4). A good hypothesis links concrete variables based on your theory and makes specific predictions. For example, researchers predicted that participants who viewed a video featuring a "Stress is good for you" message would show improved psychological symptoms and work performance relative to a group exposed to "Stress is bad for you" messages (Crum, Salovey, & Achor, 2013). The concrete variables in this study are exposure to the two different videos ("Stress is good for you" versus "Stress is bad for you") and the measures of psychological symptoms and work performance. The researchers also must consider the possibility that there would be no difference in the effects of exposure to the video messages. Figure 2.4 Generating and Testing Hypotheses. Crum, Salovey, and Achor (2013) tested a hypothesis that viewing a "Stress is good for you" message or a "Stress is bad for you" message would produce different outcomes. Participants viewing the "Stress is good for you" messages experienced increased "soft" work outcomes (maintaining focus and communicating) and "hard" work outcomes (quality, quantity, accuracy, and efficiency) relative to the "Stress is bad for you" and control groups.


© Cengage Learning Source: Crum, A. J., Salovey, P., & Achor, S. (2013). Rethinking stress: The role of mindsets in determining the stress response. Journal of Personality and Social Psychology, 104(4), 716–733. doi: 10.1037/a0031201 Scientists can never "prove" that a hypothesis is true because some future experiment, possibly using new technology not currently available, might show the hypothesis to be false. All they can do is show when a hypothesis is false. A false hypothesis must always be modified or discarded. Evaluating Hypotheses Once you have a hypothesis, you are ready to collect the data necessary to evaluate it. The existing scientific literature in your area of interest provides considerable guidance regarding your choice of methods, materials, types of data to collect, and ways to interpret your data. Thinking Scientifically Does Psychology Have a Replication Problem? One of the checks on science is the practice of replicating, or attempting to reproduce, scientific data. A scientist producing the original data might want to double-check their results, or other scientists might want to see if they can produce the same results (Simons, 2014). What happens if a study fails to replicate? When a group of nearly 300 psychologists led by Brian Nosek, known as the Open Science Collaboration (OSC), set out to replicate 100 studies from three well-respected psychology journals, the results were not encouraging (Nosek et al., 2015). Only 36 percent of the replications produced significant results, meaning that the researchers were unable to duplicate the original findings of the studies the majority of the time. The average effect size, or the strength of a phenomenon, in the replications was only about half of that reported in the original studies. In other words, if a study found that 50 percent of the difference in physical aggressiveness was due to hot weather, the replication might show that only about 25 percent of the difference in aggressiveness was accounted for by the weather. Does this mean that we have to throw out these results? While failure to replicate should give any scientist a reason to reflect on their results, other psychologists are not particularly alarmed by the findings of Nosek and his colleagues (Gilbert, King, Pettigrew, & Wilson, 2016). The methods used in the replication efforts were often different from those of the original studies. For example, a sample of Italians substituted for Americans in a study of attitudes toward African Americans, and a study of college students being called on by a professor was replicated with people who had not attended college. These differences in sampling, rather than the validity of the original study, might have been the key reason for finding different outcomes. Scientific debates usually move us in the right direction. Regardless of how big a replication problem psychology might or might not have, psychologists are looking for new methods to make their results even more reliable. For example, submitting a public "registered report" of the methods and statistical analyses that a researcher plans to run prior to conducting a study discourages any after-the-fact efforts to tweak data and results to find something interesting when the main prediction of the study fails. Sharing data, detailed methods, and results in public places allows others to evaluate a researcher's findings.
Addressing unrealistic pressure on academics to “publish or perish” and putting quality of research ahead of quantity should also contribute to a more robust scientific environment. One of the studies cited frequently in psychology’s replication crisis is a 2008 paper by Schnall, Benton, and Harvey that reported that hand-washing reduced the severity of participants’ moral judgments. Two efforts conducted by Johnson, Cheung, and Donnellan (2014) to replicate these findings failed, leading to a heated debate about what replication means.
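The effect size idea discussed in this box can be made concrete with a short sketch. Cohen's d is one of the most common effect size measures: the difference between two group means divided by their pooled standard deviation. All numbers below are hypothetical; this is not the Open Science Collaboration's data.

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d: the standardized difference between two group means."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * np.var(group1, ddof=1) +
                  (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(group1) - np.mean(group2)) / np.sqrt(pooled_var)

# Hypothetical original study versus its replication: the same
# manipulation, but the replication's effect is about half as large.
rng = np.random.default_rng(1)
original_treatment = rng.normal(0.8, 1.0, 50)
original_control = rng.normal(0.0, 1.0, 50)
replication_treatment = rng.normal(0.4, 1.0, 200)
replication_control = rng.normal(0.0, 1.0, 200)

print(f"original d    = {cohens_d(original_treatment, original_control):.2f}")
print(f"replication d = {cohens_d(replication_treatment, replication_control):.2f}")
```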

Subbotina Anna/ Shutterstock.com Communicating Science Science is a vast, collaborative enterprise. Not only do we stand on the shoulders of giants because we benefit from the work that has been done previously, but we also depend on many others in the scientific community to help us improve our work and avoid mistakes. Normally, this evaluation is done by submitting research to conferences or for publication. During this process, research undergoes peer review, in which it is scrutinized by other scientists who are experts in the same area. Only if other experts conclude that new research is important, accurate, and explained thoroughly will it be added to the existing body of scientific knowledge. To appreciate the importance of this peer review, contrast this process with what happens when a person simply decides to post a tweet or launch a personal website. The author is solely responsible for the content, and there are no checks on the accuracy of that content. The methods of psychological science can help us evaluate questions like the effects, if any, of social media use on mental health outcomes.

Vesnaandjic/E+/Getty Images During peer review, research that fits with existing knowledge is typically accepted more rapidly than work that is less consistent with previous reports. Results often undergo replication, which means that other scientists independently attempt to reproduce the results of the study in question (Klein et al., 2014). If the data are replicated, they will be accepted quickly. If other scientists are unable to replicate the data, their extra effort will have prevented inaccurate results from cluttering the scientific literature. Although this process might slow the publication of some innovative research, the result—more accuracy—is worth the effort.

Summary 2.1 Steps to Critical Thinking

Each question for detecting good critical thinking is paired with a question for detecting poor critical thinking (all answers to the poor-thinking questions should be no):

1. What am I being asked to believe or accept? / Do I prefer being given the correct answers rather than figuring them out myself?
2. What evidence supports this position? / Do I rely on gut feelings instead of thinking a lot about my decisions?
3. Are there other ways that this evidence could be interpreted? / Am I forgetting to review my conclusions to check for mistakes?
4. What other evidence would I need to evaluate these alternatives? / Am I oversensitive to criticism about my conclusions?
5. What are the most reasonable conclusions?

2-2 How Do Psychologists Conduct Research? Psychological scientists use a variety of research methods, including descriptive, correlational, and experimental methods, depending on the type of question being asked. Descriptive methods, including surveys, case studies, and observations, provide a good starting place for a new research question. Correlational methods help psychologists see how two variables of interest, like the number of hours spent on social media platforms and symptoms of depression, relate to each other. Psychologists use experiments to test their hypotheses and to determine the causes of behaviour. In the next sections, we will describe the common research methods used in psychological science and then compare how they might be used to approach research questions regarding cyberbullying—bullying that takes place over digital devices such as phones and computers (e.g., through text messages or social media). Each method—descriptive, correlational, and experimental—provides a different view of the phenomenon in question, and each has a particular profile of strengths and weaknesses. Each requires different types of statistical analyses, which are described in more depth later in the chapter. Many psychological studies combine several of these methods. When similar outcomes are observed using multiple methods, we have even more confidence in our results. However, before jumping into these different methods, we will first review the process of operationalization, which is one of the first steps in any psychological research endeavour.

2-2a Constructs and Operationalizations Most psychological research involves the investigation of constructs. Constructs are internal attributes that cannot be directly observed but are useful for describing and explaining behaviour. Some examples of constructs include anxiety, intelligence, and extraversion. You might be thinking, "Wait a minute—of course, I can observe these things. My roommate was so anxious before her most recent date that she wouldn't stop fidgeting and almost threw up!" But what you saw was not anxiety. You saw fidgeting movements and a run to the bathroom. Anxiety is the hypothetical, nontangible construct that we use to explain such observations. Whenever a researcher is interested in examining a construct, one of the first things they need to do is decide how they want to make the nontangible tangible. If a psychologist wants to test the hypothesis that first-year undergraduates experience more anxiety than third-year undergraduates, they first need to figure out how to measure anxiety. This process of taking an abstract construct and defining it in a way that is concrete and measurable is known as operationalization. For any given construct, there will be multiple ways of measuring it. To assess anxiety, the psychologist might examine observable behaviours (e.g., fidgeting), have participants complete a self-report measure (e.g., "How anxious do you feel on a scale from 1 to 10?"), or use a galvanic skin response device to measure the amount of sweat on participants' palms. Each of these methods provides an operationalization of anxiety, although some operationalizations may be more valid than others, as will be discussed later in the chapter.
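To make the idea of operationalization concrete, here is a minimal Python sketch that standardizes and combines the three anxiety measures just described. The data and the equal-weight composite are purely hypothetical choices for illustration; a real study would justify its measures and scoring scheme.

```python
import numpy as np

# Hypothetical scores for five participants on three operationalizations
# of the construct "anxiety". The units differ, so we z-score each
# measure before comparing or combining them.
self_report = np.array([7, 3, 9, 5, 6])                  # 1-10 rating
fidgets_per_minute = np.array([12, 4, 15, 8, 9])         # observed behaviour
skin_conductance = np.array([8.1, 4.2, 9.5, 6.0, 6.8])   # microsiemens (GSR)

def zscore(x):
    """Standardize a measure: mean 0, standard deviation 1."""
    return (x - x.mean()) / x.std(ddof=1)

# One of many possible operationalizations: an equal-weight composite.
composite_anxiety = (zscore(self_report) +
                     zscore(fidgets_per_minute) +
                     zscore(skin_conductance)) / 3
print(np.round(composite_anxiety, 2))
```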

2-2b Descriptive Methods Descriptive methods include case studies, naturalistic observations, surveys, focus groups, and interviews. As we have seen, personal observations and common-sense ideas are especially vulnerable to bias, but descriptive methods allow a researcher to make careful, systematic, real-world observations. Descriptive methods can illuminate associations between variables and establish prevalence rates. Armed with these scientific observations, the researcher will be in a strong position to generate hypotheses. The Case Study A case study provides an in-depth analysis of the behaviour of one person or a small number of people. Many fields, including medicine, law, and business, use the case study method. Psychologists often use case studies when large numbers of participants are not available or when a particular participant possesses unique characteristics, as in the case described in this section. Interviews, background records, observation, personality tests, cognitive tests, and brain imaging provide information necessary to evaluate the case. Case studies not only are a useful source of hypotheses, but also can be used to test hypotheses. If you did a case study on a planet outside our solar system and discovered life there, you would disprove a hypothesis that no life exists outside our solar system. One of the most productive case studies in psychology chronicled more than 50 years of examinations of Henry Molaison (1926–2008), known in the scientific literature as "the amnesic patient H.M." In 1953, Molaison underwent brain surgery to control his frequent, severe seizures. Although the surgery may have saved his life, he was left with profound memory deficits, which are described in Chapter 9. Through painstaking testing and evaluation of Molaison, psychologists learned a great deal about the brain structures and processes that support the formation of memories (Corkin, 2002). Canadian neuropsychologist Brenda Milner spent years working with Molaison, leading to profound discoveries such as the realization that there must be more than one long-term memory system. Even after his death, Molaison continues to contribute to our knowledge. Researchers from the Massachusetts Institute of Technology (MIT), the Massachusetts General Hospital, and the University of California, San Diego, are analyzing Molaison's brain today. One of the most famous case studies in psychology is that of Henry Molaison (left), who was known in the literature as "the amnesic patient H.M." until his death in 2008. For more than 50 years, Molaison allowed psychologists to evaluate his memory deficits resulting from brain surgery. After his death, scientists like Jacopo Annese (below) of the University of California, San Diego, began a careful examination of Molaison's brain that continues today.


Photos of H.M. by Suzanne Corkin. Copyright 2013 by Suzanne Corkin, used by permission of The Wylie Agency LLC.; s44/ZUMA Press/Newscom How could you use the case study method to learn about cyberbullying? You could conduct a case study of Aydin Coban, a man from the Netherlands who in 2017 was sentenced to almost 11 years in prison for engaging in online fraud and blackmail involving 34 young women. He also faces five charges in relation to the case of Amanda Todd, a teenager from British Columbia who tragically ended her own life after becoming a victim of online sexual exploitation, harassment, and aggression. To conduct your case study, you would want to gather as much information as possible about Coban, possibly by interviewing him and other people who know him, viewing legal documents, and observing media accounts. Aydin Coban is currently serving a prison sentence for cyber-stalking and sexually exploiting dozens of teenage girls from around the world. He is also accused of being the online tormentor of Amanda Todd, a teenage girl from British Columbia who took her own life in 2012.


The Sun/News Licensing What are the advantages of using the case study method to learn about cyberbullying? Although cyberbullying is far from rare, this example is an extreme case that resulted in a criminal trial, conviction, and maximum sentence, and the case study method is well suited to learning about unusual situations. Findings from case studies can also lead to the generation of new hypotheses that can then be tested using more rigorous methods. For example, by examining the details of this case, you might better understand the complexities and challenges involved in the criminal prosecution of online behaviour, and you might hypothesize that knowledge of these legal barriers contributes to the sense of helplessness experienced by victims of cyberbullying. You could then design a study that would allow you to test this hypothesis. Naturalistic Observation If you are interested in learning about larger groups of people than are possible with the case study method, you might pursue naturalistic observation, or in-depth study of a phenomenon in its natural setting. Compared to the case study method, we are looking at a larger group of people, which will strengthen our ability to apply our results to the general population. We also have the advantage of observing individuals in their natural, everyday circumstances. Jane Goodall used naturalistic observation to illuminate the world of the chimpanzee.

Avalon/Bruce Coleman Inc/Alamy Stock Photo A classic example of the method of naturalistic observation is the careful, long-term study of chimpanzees conducted in their habitat by Jane Goodall. In the summer of 1960, Goodall, then 26 years old, began her painstaking observations of chimpanzees living in Gombe National Park in Tanzania. Among her discoveries was that chimpanzees were not vegetarians, as previously assumed (Goodall, 1971, p. 34): I saw that one of them was holding a pink-looking object from which he was from time to time pulling pieces with his teeth. There was a female and a youngster and they were both reaching out toward the male, their hands actually touching his mouth. Presently the female picked up a piece of the pink thing and put it to her mouth: it was at this moment that I realized the chimps were eating meat. As a result of Goodall’s years spent following the chimpanzees, scientists have a rich, accurate knowledge of the behaviour of these animals in the wild. Impressed by Goodall’s results, you plan to pursue further knowledge about cyberbullying by examining the language used in the public Twitter posts of the people you follow on Twitter. First, you would need to operationalize cyberbullying, deciding exactly what types of language you would count as being bullying in nature, whether you are going to also include messages that have images or videos, and so on. You would then code each tweet as being either bullying in nature or not bullying. As in Goodall’s case, this approach has the advantages of providing insight into natural, real-world behaviours with large numbers of participants. Some naturalistic observations are conducted when people know that they are being observed, while in other cases, people are unaware of being observed. Both situations raise challenges. If we know we are being observed, we might act differently. But watching people who do not know that they’re being watched raises ethical issues, which will be explored later in this chapter. In the case of our Twitter study, everyone knows that tweets are public. But how do you think your Twitter followers would feel if they discovered they were unwitting participants in your study? The use of naturalistic observation illustrates the importance of choosing a method that is well suited to the research goals. Like the case study method, naturalistic observation can be helpful for developing hypotheses, but other methods must be used to test them. For example, you might observe that Twitter posts coming from users with anonymous accounts appear to be more bullying in nature compared to posts from users who are using their real names. Such an observation may then lead to a hypothesis about the role of anonymity and cyberbullying. Most hypotheses in psychology look at the relationships between two or more concepts, such as the degree of anonymity and likelihood of cyberbullying. Testing a hypothesis would allow you to say whether a relationship between these variables exists, how strong the relationship is, what direction it goes in, and so on. It might appear to you from your Twitter observations that the people with anonymous accounts are engaged in more cyberbullying, but you have no way to demonstrate your point. People who use anonymous accounts may differ from people who use their real names in a lot of different ways—perhaps these accounts are used by multiple individuals or they are “bot” accounts and not real people at all. 
With only your naturalistic observations to go on, you can’t say for sure that being anonymous makes individuals more prone to online aggression. Data collected through surveys can help researchers understand more about the prevalence and nature of cyberbullying.
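Looking back at the Twitter observation study sketched above, the "code each tweet" step is itself an operationalization of cyberbullying. Here is a deliberately naive Python sketch of that step; the keyword list is hypothetical, and real studies would rely on trained human coders applying a validated coding scheme rather than simple keyword matching.

```python
# A naive operationalization of "bullying tweet": the tweet contains at
# least one cue from a (hypothetical) list of bullying phrases.
BULLYING_CUES = {"loser", "nobody likes you", "ugly", "shut up"}

def code_tweet(text: str) -> bool:
    """Return True if the tweet matches our bullying operationalization."""
    lowered = text.lower()
    return any(cue in lowered for cue in BULLYING_CUES)

sample_tweets = [
    "Great game last night, congrats to the team!",
    "You are such a loser, nobody likes you.",
]
for tweet in sample_tweets:
    label = "bullying" if code_tweet(tweet) else "not bullying"
    print(f"{label}: {tweet}")
```

Even this toy example exposes the judgment calls involved: sarcasm, images, and context all escape a keyword list, which is one reason different operationalizations can yield different prevalence estimates.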

SpeedKingz/ Shutterstock.com The Survey Surveys, or questionnaires, allow us to ask large numbers of people questions about attitudes and behaviour. Surveys provide a great deal of useful information quickly, at relatively little expense. Commercial online survey services make conducting surveys easier than in the past. One of the primary requirements for a good survey is the use of an appropriate sample, or subset of a population being studied. The population consists of the entire group from which a sample is taken. Good results require large samples that are typical, or representative, of the population that we wish to describe. Major pollsters, like the Pew Research Center and Gallup, take great pains to recruit survey participants who mirror the characteristics of the public across factors such as gender, age, race or ethnicity, education, occupation, income, and geographical location. People taking surveys might be more interested in pleasing others or appearing “normal” than in answering honestly.

Dusit/ Shutterstock.com Surveys use self-reporting, so results can be influenced by people’s natural tendency to want to appear socially appropriate (Corbett, 1991). As we will discover in Chapter 13, people have strong tendencies to conform to the expectations of others. In some surveys, this factor is not a problem. If you ask people whether they prefer lattes or mochas, you will probably get a fairly honest answer. However, when people believe that their true attitudes and behaviours will not be viewed favourably by others, they are more likely to lie, even when their answers are confidential and anonymous. Focus Groups and Interviews Surveys have the benefit of allowing researchers to collect information from large groups of people in a very efficient manner, as the data can be collected relatively quickly and at very little cost. However, part of what makes the survey method so efficient is that it often places restrictions on the type of data that is collected. For example, participants are often presented with items (e.g., “I feel sad after scrolling through my social media accounts”) that require a specific type of response (e.g., a numeric response from 1 = strongly disagree to 7 = strongly agree). While there is something to be gained by knowing whether the average response to such an item is closer to 1 or closer to 7, there is also something lost by asking people to respond to such a question with a single number. Focus groups and interviews are used to gather more detailed, nuanced information from research participants. In both interviews and focus groups, participants are asked to respond to specific questions or prompts, similar to the survey method. But unlike surveys, participants are not constrained to some preselected choice of response options. Rather, participants are free to provide any response they wish. Often, a researcher might use the survey method to gather initial information on a phenomenon from a large group of individuals, and then use the findings from the survey to develop prompts for use in interviews or focus groups. Let’s see how scientists have used these methods to investigate cyberbullying. Researchers from the University of Toronto conducted a descriptive study assessing the victimization, perpetration, and witnessing of cyberbullying among Canadian university students (Mishna et al., 2018). Over 1300 students responded to an online survey that asked them about their experiences with cyberbullying and the impact of these experiences on their mental health, along with sociodemographic information. At the end of the survey, the respondents were asked to indicate whether they were interested in further discussing their online/social media experiences, and the researchers then followed up with these individuals by conducting focus groups and interviews. As shown in Figure 2.5, findings from the survey revealed that 28 percent of respondents had been sent angry, vulgar, threatening, or intimidating messages. Focus group data revealed that students believed certain groups of individuals (e.g., women, people of colour) were more likely to be victims than others, and that victims with pre-existing mental health challenges were likely to experience more acute levels of distress following an instance of cyberbullying. The students also expressed a desire for more accessible resources on cyberbullying, including mental health support, as well as information on school policies and how to report incidents. 
Figure 2.5 Results from a Survey on Cyberbullying among Canadian University Students. Faye Mishna and colleagues (2018) asked university students whether they had been the victim of various types of cyberbullying. The most frequently reported form of cyberbullying was being sent rude or intimidating messages. Interestingly, almost half of the students reported experiencing some other type of negative online experience not included in the options listed here. Across each of these forms of cyberbullying, the most commonly reported perpetrator was a friend of the victim.


Source: Adapted from F. Mishna et al. (2018), "Social media, cyber-aggression, and student mental health on a university campus," Journal of Mental Health, 27(3), 222–229. doi: 10.1080/09638237.2018.1437607. Diverse Voices in Psychology How Do We Recruit Diverse Research Participants? If we want to generalize our conclusions to "people," it is very important for us to sample from the population of "people" rather than depending on handy convenience samples of undergraduate students enrolled in psychology courses. Given limited resources (time and money), how do psychological scientists reach a diverse sample of participants? You might think that using online recruitment, such as MTurk and Survey Monkey, could solve this problem, but that does not appear to be the case (Maner, 2016). These participants are probably not typical of the adult population in terms of their education and understanding of technology, and they have been exposed to the research methods used by behavioural scientists. Jon Maner (2016) makes a strong case for conducting studies in the field as a way of recruiting more diverse and representative samples. He describes a study of diet and exercise effects on diabetes conducted in 27 geographically diverse clinical centres. Not only were the researchers able to find a large sample (over 3000 people participated), but 45 percent of the sample identified themselves as members of underrepresented minority groups. An additional advantage of these field studies, according to Maner, is their stronger likelihood of being replicable. Earlier in this chapter, we explored the controversy over the replicability of psychological research.

Rawpixel/istock/Getty Images Psychological scientists continue their search for methods that will provide more accurate results. Ensuring that a research sample is diverse, mirroring the population, is an important step in this direction.

2-2c Correlational Methods Correlations measure the direction and strength of the relationship between two variables, or factors that have values that can change, like a person’s height and weight. Correlations allow psychologists to explore whether hours of sleep are related to academic achievement and study concentration in university students (van der Heijden et al., 2018) or whether the severity of autistic traits in typically developing males is related to testosterone levels (Tan et al., 2018). If you’re curious about the results of these studies, the first showed that chronic sleep reduction was related to lower grades and reduced study concentration, and the second showed no relationship between autistic traits and testosterone levels. We begin our analysis of correlations by measuring our variables. A measure answers the simple question of “how much” of a variable we have observed. After we obtain measures of each variable, we compare the values of one variable to those of the other and conduct a statistical analysis of the results. Three possible outcomes from the comparison between our two variables can occur: positive, negative, or zero correlation. In a positive correlation, high levels of one variable are associated with high levels of the other. Height and weight usually show this type of relationship. In most cases, people who are taller weigh more than people who are shorter. Two variables also can show a negative correlation, in which high values of one variable are associated with low values of another. For example, high levels of alcohol consumption among postsecondary students are usually associated with low GPAs. The third possible outcome is a zero correlation, in which the two variables have no systematic relationship with each other. When variables have a zero correlation, knowing the value of one variable does not tell us anything about the value of the other (see Figure 2.6). For example, emergency room and law enforcement personnel are often convinced that they are busier with emergencies and crime on nights with a full moon. In contrast, numerous scientific studies of lunar cycles show zero correlation with emergency room admissions, traffic accidents, or other types of trauma (Stomp, ten Duis, & Nijsten, 2011). Figure 2.6 Correlations Describe the Direction and Strength of Relationships between Two Variables. (a) In positive correlations, high levels of one variable are associated with high levels of the other variable. (b) In negative correlations, high values of one variable are associated with low levels of the other variable. (c) In zero correlations, the two variables do not have any relationship with each other.
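Computing a correlation coefficient is straightforward once two variables have been measured. The sketch below uses hypothetical sleep and grade numbers in the spirit of the sleep study mentioned above; these are not van der Heijden et al.'s actual data.

```python
import numpy as np

# Hypothetical data for ten students: average nightly sleep (hours)
# and final grade (percent). Not real study data.
sleep_hours = np.array([6.0, 7.5, 5.5, 8.0, 6.5, 7.0, 5.0, 8.5, 6.0, 7.5])
grades = np.array([68, 78, 63, 85, 70, 74, 60, 88, 66, 80])

# Pearson's r ranges from -1 (perfect negative) through 0 (no
# relationship) to +1 (perfect positive).
r = np.corrcoef(sleep_hours, grades)[0, 1]
print(f"r = {r:.2f}")  # here, more sleep goes with higher grades
```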


© Cengage Learning “When you’re reading or listening to the news, watch for the use of words like link, association, or relationship used to describe two variables, such as a headline that states ‘Lack of sleep linked to depression’ or ‘Drinking red wine associated with lower rates of heart disease.’ These key words usually mean that the data are correlational but are often mistaken for causal. Now you know how you should—and should not—interpret these reports.” Correlational research results are frequently misunderstood. Correlations permit us to discuss the relationships between two variables but tell us nothing about whether one variable causes changes in the other. For example, one study examining Canadian children from ages 10 to 17 has shown that children’s preferences for playing mature and violent video games are positively correlated with their perpetration of cyberbullying (Dittrick et al., 2013). However, we certainly cannot say that playing violent video games causes cyberbullying. This conclusion may seem reasonable and possibly true, so why must we abandon it? First, the two variables in a correlation can influence each other simultaneously. Perhaps playing violent video games increases the tendency to engage in aggressive online behaviours, but engaging in cyberbullying may also draw youth to playing violent video games. Second, we might be observing a situation in which a third variable is responsible for the correlation between our two variables of interest. It might be the case that children who have been victims of aggression themselves are more likely to both prefer violent video games and engage in cyberbullying. Or perhaps children who score higher on antisocial personality traits are more likely to engage in both of these activities. Antisocial tendencies may be a third variable that predisposes both a preference for violent video games and a tendency to engage in cyberbullying (see Figure 2.7). Figure 2.7 Third Variables and Correlations. Third variables can be responsible for the correlation that we observe in two other variables. In our example of cyberbullying and video game preferences, having antisocial tendencies could be a third variable that predicts both a choice of violent games and a tendency to engage in cyberbullying. The possibility of third variables is one reason that we must be careful when we reach conclusions based on correlational data.
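The third-variable scenario in Figure 2.7 is easy to demonstrate with a small simulation. In the sketch below, a single hypothetical trait ("antisocial tendencies") independently drives both violent-game preference and cyberbullying; the two outcomes then correlate even though neither causes the other. All numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# The third variable drives both outcomes; the two outcomes never
# influence each other directly.
antisocial = rng.normal(0, 1, n)
game_preference = 0.8 * antisocial + rng.normal(0, 1, n)
cyberbullying = 0.8 * antisocial + rng.normal(0, 1, n)

r = np.corrcoef(game_preference, cyberbullying)[0, 1]
print(f"r = {r:.2f}")  # a clear positive correlation, with no direct causal link
```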

© Cengage Learning If we cannot make conclusions about causality using correlations, why would we use them? In a number of circumstances, correlations are more appropriate than other research methods (see Figure 2.8). For example, it would be unethical to expose young men and women to different numbers of stressful life events. Instead, we can ask these individuals to report on how many stressful life events they have experienced in the past five years, which can then be correlated with various mental health outcomes. Although this method will not allow us to conclude that stressful life events cause outcomes such as depression, we can at least identify the strength and direction (positive, negative, or zero) of any correlation between number of stressful life events and symptoms of depression (see Figure 2.9). Using this approach, researchers have shown that the strength of the relationship between stressful life events and depression depends on genetics, specifically whether individuals are carrying a long (L) or short (S) allele on 5-HTTLPR, a polymorphic region in the gene that codes for the serotonin transporter (serotonin is a neurotransmitter discussed in Chapter 4, and the role of serotonin in depression is discussed in Chapter 14). Figure 2.8 Many Cyberbullying Studies Use the Correlational Method. A search of Google Scholar for articles published between 2015 and 2016 shows that survey and correlational studies dominate this area of research. Typical of this approach is a study by Kowalski and Limber (2013). You can see that youth engaging in traditional bullying or cyberbullying share many outcomes, but appear to differ in their levels of anxiety and suicidal ideation.


© Cengage Learning Source: R. M. Kowalski & S. P. Limber (2013). Psychological, physical, and academic correlates of cyberbullying and traditional bullying. Journal of Adolescent Health, 53(1, Supplement), S13–S20. doi: http://dx.doi.org/10.1016/j.jadohealth.2012.09.018. Figure 2.9 Correlations Can Help Us Learn about Situations in Which Experiments would be Unethical. Even though we cannot make conclusions about causes based on correlations, we can obtain useful information. It would be unethical to submit a group of participants to a series of stressful life events. However, the positive correlation that we find between number of stressful life events and symptoms of depression indicates that experiencing stressful events is a risk factor for depression. In the study shown here, researchers also examined whether the relationship between stressful life events and depression symptoms depended on whether participants carried a long (LL) or short (SS, SL) allele on 5-HTTLPR (the polymorphic region in the gene that codes for the serotonin transporter). Consistent with previous research, the relationship between stressful life events and depression symptoms was stronger for participants carrying the S allele versus the L allele.


stock-eye/E+/Getty Images Source: Adapted from C. B. Nemeroff (2016). “Paradise Lost: The Neurobiological and Clinical Consequences of Child Abuse and Neglect,” Neuron, 89(5), 892–899. doi: 10.1016/j.neuron.2016.01.019

2-2d Experimental Methods The scientist’s most powerful tool for drawing conclusions about research questions is the formal experiment. Unlike cases in which descriptive methods are used, the researcher conducting an experiment has a great deal of control over the situation. Unlike correlational methods, the use of the formal experiment allows us to talk about cause (see Figure 2.10). Figure 2.10 How to Design an Experiment. A good experimental design features random assignment of participants to groups, appropriate control groups, control of situational variables, and carefully selected independent and dependent variables.


© Cengage Learning A researcher begins designing an experiment with a hypothesis, which can be viewed as a highly educated guess based on systematic observations, a review of previous research, or a scientific theory. An experimental hypothesis takes this form: "If I do this, that will happen." To test the hypothesis, the researcher manipulates or modifies the value of one or more variables and observes changes in the values of others. The variable controlled and manipulated by an experimenter ("If I do this") is known as the independent variable. We need some way to evaluate the effects of this manipulation. We use a dependent variable, defined as a measure used to assess the effects of the manipulation of the independent variable, to tell us "what will happen" as a result of the independent variable. Like the independent variable, our choice of dependent variable is based on our original hypothesis. After determining our independent and dependent variables, we still have quite a bit of work to do. In most experiments, we want to know how simply going through the procedures of being in an experiment influences our dependent variable. Perhaps the hassle of going to a laboratory and filling out paperwork changes our behaviour. To evaluate these irrelevant effects and establish a baseline of behaviour under the experimental conditions, we assign some of our participants to a control group. In many experiments, the control group will experience all experimental procedures except exposure to the independent variable. When a new treatment is being tested, the control group might experience the standard treatment for a condition. The experience of the control group should be as similar as possible to that of the experimental groups, who experience different values of the independent variable. We want to ensure that our dependent variables reflect the outcomes of our independent variables instead of individual differences among the participants' personalities, abilities, motivations, and similar factors. To prevent these individual differences from masking or distorting the effects of our independent variable, we randomly assign participants to experimental or control groups. Random assignment means that each participant has an equal chance of being assigned to any group in an experiment. With random assignment, differences that we see between the behaviour of one group and that of another are unlikely to be the result of the individual differences among the participants, which tend to cancel each other out. Individual differences among participants are an example of confounding variables, or variables that are irrelevant to the hypothesis being tested and can alter or distort our conclusions. For example, a researcher might want to test the effects of aerobic exercise on blood pressure. If some participants competed in triathlons without the researcher's knowledge, their athletic experience would confound the interpretation of the results. Random assignment to groups typically controls for confounds caused by these types of individual differences, but other sources of confounds exist. Situational confounds, such as time of day or noise levels in a laboratory, also could affect the interpretation of an experiment. Scientists attempt to run their experiments under the most constant circumstances possible to rule out situational confounding variables.
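Random assignment is simple to implement, and seeing it as code makes clear why it works: chance, not the researcher or the participants, decides who ends up in which group. The participant IDs and group labels below are hypothetical, using the aerobic exercise example just described.

```python
import random

# Twenty hypothetical participant IDs.
participants = [f"P{i:02d}" for i in range(1, 21)]

# Shuffle, then split: every participant has an equal chance of landing
# in either group, so individual differences (fitness, sleep, mood)
# tend to balance out across groups.
random.shuffle(participants)
exercise_group = participants[:10]  # e.g., the aerobic exercise program
control_group = participants[10:]   # e.g., no exercise program

print("Exercise:", exercise_group)
print("Control: ", control_group)
```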
One aspect of cyberbullying that has been examined experimentally is the effect of bystanders (sometimes referred to as "cyberbystanders") on the likelihood of intervention. As we will discuss in Chapter 13, social psychological research has shown that people are less likely to be offered help in situations where many bystanders are present compared to situations with few bystanders. One reason is that people feel less personally responsible for helping when there are lots of other people around. In one study, researchers tested the hypothesis that a larger number of bystanders in a cyberbullying incident would lead to decreased feelings of personal responsibility, which in turn would reduce intentions to intervene in a cyberbullying incident (Obermaier, Fawzi, & Koch, 2016). As part of the study, all participants viewed a (fictitious) cyberbullying scenario in a university Facebook group, where someone had posted a question and someone else had responded in an insulting manner. The independent variable manipulated by the researchers was how many people had witnessed the incident (whether the post had been viewed by 2 or 5025 people). The dependent variables measured by the researchers included how personally responsible the participants felt and how likely they would be to intervene. Participants were randomly assigned to conditions, resulting in groups that did not differ in terms of gender distribution or age. Results of this experiment confirmed the researchers' hypothesis: When more individuals had witnessed the cyberbullying, participants felt less responsible and were less likely to intervene (see Figure 2.11). Figure 2.11 Experimental Method in Action. Obermaier, Fawzi, and Koch (2016) randomly assigned participants to view an incident of cyberbullying in a university Facebook group that had been seen by either few (2) or many (5025) individuals. Participants were then asked to report how personally responsible they felt to intervene in the incident and whether they would intervene or not.


Researchers attempt to reduce the impact of confounding variables on their results. In a test of the effects of aerobic exercise on blood pressure, situational confounding variables, such as (1) traffic outside the building, (2) a noisy treadmill, and (3) a neighbour breathing heavily, can be controlled by holding the environment as constant as possible for all participants. Individual differences, such as (4) an early morning after little sleep or (5) superior fitness, can be controlled by randomly assigning participants to groups.


Avalon/Bruce Coleman Inc/Alamy Stock Photo As powerful as it is, the experimental method, like the other methods discussed previously, has some limitations. Experiments can be somewhat artificial. In the study described above, participants were asked to imagine that they were a part of this university Facebook group where this incident was taking place. Although this pretense is necessary in order to maintain a highly controlled setting, it also results in a rather artificial situation. Participants know that they are in a research study, and they may vary their behaviour as a result. However, making a laboratory experiment more realistic can raise ethical challenges. In a study conducted in 1962, before current ethical guidelines for research had been adopted, military personnel were led to believe that their lives were in danger so that experimenters could realistically assess the effects of panic on performance (Berkun, Bialek, Kern, & Yagi, 1962). Although the responses of these participants were probably quite representative of real life, few of us would want to be put in their position. This type of research could not be conducted under today's ethical standards, as explained later in this chapter. Another issue with the experimental method arises from differences in the choices of independent and dependent variables. Independent (manipulated) and dependent (measured) variables have to be defined and implemented in some concrete fashion. For your Grade 5 science fair project, you may have varied the amount of water you provided seedlings with, in order to examine the effect on seedling height. Variables like water volume (independent variable) and seedling height (dependent variable) are relatively straightforward to measure and manipulate, as there are concrete and objective methods available (measuring cups and rulers). However, as discussed previously, most psychological research involves the investigation of constructs that require operationalization. In the context of an experiment, this means operationalizing both the independent and dependent variables involved in the study. There are many ways to operationalize variables in practical terms. As we have seen previously, cyberbullying is a construct that can take many forms. In some studies, researchers may require the presence of repeated behaviours (e.g., multiple harassing texts) in order for an action to be considered cyberbullying. In other cases, researchers may decide that a single act can be considered cyberbullying. Research examining physical aggression also employs a range of different operationalizations. Some researchers operationalize aggression in a laboratory setting by measuring how lengthy and loud a sound blast a person is willing to inflict on another person (Anderson & Dill, 2000). One of the odder dependent variables used in video game aggression research is the hot sauce paradigm, in which the amount of hot sauce that participants choose to administer to another participant is used to measure aggression (Lieberman, Solomon, Greenberg, & McGregor, 1999). Other researchers might choose different ways to operationalize aggression, such as frequency of physical fights among preschoolers. When the body of research on a topic (such as the role of violent video games in cyberbullying or physical aggression) involves a wide range of different operationalizations, it becomes difficult to make direct comparisons among the many studies.
To conduct an experiment, we must carefully operationalize, or define our variables in practical terms. One way to operationalize physical aggression is to measure how often a preschooler has a physical fight with others.

Mary Kate Denny/PhotoEdit Meta-Analyses The point of this discussion is not to convince you that scientists don’t know what they’re talking about, but rather to impress upon you the importance of reviewing research results using your best critical thinking skills. In seeking to understand something as complicated as the science of mind, it is unlikely that any single study could provide complete information about a phenomenon. Instead, progress in our understanding results from the work of many scientists using diverse methods to answer the same question. Conducting a meta-analysis, or a statistical analysis of many previous experiments on the same topic, often provides a clearer picture than do single experiments observed in isolation. Meta-analyses have their own share of challenges, however. A meta-analysis is only as good as the studies on which it is based. Published studies available to researchers conducting a meta-analysis might be subject to publication bias, or the possibility that they are not representative of all the work done on a particular problem. A “file drawer” problem also exists, in which journals are more likely to publish studies that demonstrate significant effects of an independent variable, such as video game violence, on a dependent variable, such as aggression, than studies that show no significant effects. If publication bias is present, the results of any meta-analysis might be misleading. The Importance of Multiple Perspectives As we will see in so many examples in this textbook, even the best research designs might mislead us if they fail to take multiple perspectives into account. Most research on video games and aggression has involved single participants playing alone. What happens when we “zoom out,” using the social perspective to look at the effects of video games on groups of people playing together? Playing video games cooperatively is associated with less subsequent aggressive behaviour, regardless of whether the game played was violent or not (Jerabeck & Ferguson, 2013). Nearly all studies of video games and aggression use participants playing alone. If we add the social perspective by studying people playing together, we find that aggressive behaviour actually decreases following play, regardless of whether the game was violent.
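The meta-analysis logic described above can be sketched in a few lines. In the most common fixed-effect approach, each study's effect size is weighted by the inverse of its variance, so more precise studies count for more. The five effect sizes below are invented for illustration, not drawn from any published meta-analysis.

```python
import numpy as np

# Hypothetical effect sizes (Cohen's d) and their variances from five
# studies of the same question.
d = np.array([0.42, 0.10, 0.55, -0.05, 0.30])
var = np.array([0.04, 0.02, 0.09, 0.03, 0.05])

# Fixed-effect meta-analysis: inverse-variance weighted average.
w = 1.0 / var
d_pooled = np.sum(w * d) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))

print(f"pooled d = {d_pooled:.2f}")
print(f"95% CI = [{d_pooled - 1.96 * se_pooled:.2f}, {d_pooled + 1.96 * se_pooled:.2f}]")
```

Note that if the studies entering this calculation are themselves a biased sample of all work conducted (the file drawer problem), the pooled estimate inherits that bias, exactly as the discussion above cautions.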

Ruslan Guzov/ Shutterstock.com The research discussed throughout the remainder of this textbook has been subjected to considerable skeptical review by other experts in the field. Most has stood the dual tests of peer review and replication by others. Converging evidence from descriptive, correlational, and experimental research provides us with confidence in our conclusions. Psychology, like any science, has followed its share of wrong turns and dead ends, but most knowledge presented here has been carefully crafted to present the most accurate view possible of behaviour and mental processes. Psychology as a Hub Science Testing the Effects of Food Additives on Children’s Hyperactivity The methods described previously can be used by psychologists to provide guidance across many disciplines. For example, how many times have you heard a parent complain about a child’s out-of-control behaviour and blame it on too much sugar? How would a scientist know whether sugar or any other food ingredient affected children’s behaviour? Scientific discoveries in this area could benefit the fields of nutrition, health, medicine, and child development.

Adapted from “Mapping the Backbone of Science,” by K. W. Boyack et al., 2005, Scientometrics, 64(3), 351–374. With kind permission from Springer Science+Business Media. The gold standard for demonstrating the objective effects of any substance, whether a food additive, medication, or recreational drug, is the double-blind procedure. This procedure requires a placebo, an inactive substance that cannot be distinguished from a real, active substance. Double-blind, placebo-controlled studies have provided insight into the relationship between common food additives in fruit drinks and hyperactivity in children.

DenisNata/ Shutterstock.com The first “blind” aspect of this procedure is the inability of participants to know whether they have taken a real substance or a placebo. This feature controls for effects of the participants’ expectations. When we drink coffee, for example, we expect to feel more alert, or when we take an aspirin, we expect our headache to disappear; therefore, we may “feel” better long before the substance has had time to produce effects. Not letting subjects know whether they received the real substance or the placebo helps offset these misleading effects. The second “blind” is achieved when the researchers also do not know whether a participant has been given a real substance or a placebo until the experiment is over. This aspect ensures that the researchers’ expectations do not tilt or bias their observations. If scientists expect participants to act more alert after drinking coffee, for example, this bias could be reflected in their observations and conclusions. Returning to the question of food additives, what do double-blind, placebo-controlled studies have to say about their effects on child behaviour? In one careful study, young children were given drinks that either had no additives (placebo) or a combination of colourings and preservatives used frequently in packaged foods (McCann et al., 2007). Because it was a double-blind study, the children did not know which drink they had received, and the researchers responsible for observing the children did not know which drink had been served to each child. The outcome of this study showed that general measures of hyperactivity, or unusually large amounts of movement, were higher in the groups that had consumed the additives than in the group that had consumed the placebo. This finding has implications for attention deficit hyperactivity disorder (ADHD), which is discussed in Chapter 14. The ability of these common food additives to make normal children more hyperactive is cause for concern and worthy of further study (see Figure 2.12). Figure 2.12 Hyperactivity and Food Additives. The results obtained by McCann et al. (2007) showed that hyperactivity was higher in children who consumed one of the drinks containing common food additives (left and middle bars) than in children who consumed the placebo drink containing no additives (right bar).
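One practical way to achieve the double blind described above is to have a third party hold the key that maps neutral labels to conditions; neither the children nor the observers see the key until data collection ends. The sketch below is a hypothetical illustration of that bookkeeping, not the McCann et al. procedure.

```python
import random

# Hypothetical participants and two drink conditions.
participants = [f"child_{i:02d}" for i in range(1, 9)]
conditions = ["additives", "placebo"] * 4  # equal group sizes

# The third party shuffles the conditions and seals the key.
random.shuffle(conditions)
sealed_key = dict(zip(participants, conditions))

# Experimenters and families see only the neutral labels, never the
# key, so neither group's expectations can bias the observations.
for child in participants:
    print(f"{child}: serve the drink from the bottle labelled '{child}'")

# Revealed only after all hyperactivity ratings are complete:
print("Sealed key:", sealed_key)
```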

© Cengage Learning

2-2e How Do We Study the Effects of Time? Modifications of the methods discussed previously might be necessary for answering specific questions. Psychological scientists frequently ask questions about normal behaviours related to age. Some research has shown that cyberbullying increases as people move from childhood to late adolescence/early adulthood and then decreases as people age (Barlett & Chamberlin, 2017). Particular forms of cyberbullying may also vary across the lifespan. As Chapter 11 will show, aggression in young children means something different from aggression in a 16-year-old. Considering the context of age-related change adds a new and useful dimension to our hypothesis, but it requires additional attention to the research methods to be used. Cross-sectional studies usually show that intelligence scores decrease with age. These results are most likely a cohort effect. Performance on IQ tests has risen approximately 3 points per decade over the past 100 years, for reasons that are not fully understood (Flynn, 1984). Psychologists have three specific techniques for assessing the normal behaviours associated with age: cross-sectional, longitudinal, and mixed longitudinal designs. To do a cross-sectional study, we might gather groups of people of varying ages and assess both their exposure to violent video games and their levels of physical aggression. We might be able to plot a developmental course for age-related differences in both video game exposure and aggressive behaviour. However, the cross-sectional method introduces what we refer to as cohort effects, or the generational effects of having been born at a particular point in history. Being 20 years old in 1959 was different from being 20 years old in 1989 or 2019 because of a variety of cultural influences. Today’s 10-year-olds, who do not know of a time without the Internet, might respond differently to violent video games than today’s 50-year-olds, for reasons that have nothing to do with age. Any such cohort effects could mask or distort our cross-sectional results. A method that lessens this dilemma is the longitudinal study, in which a group of individuals is observed for a long period (see Figure 2.13). For example, the Fels Longitudinal Study began in 1929 to observe the effects of the Great Depression on children, and it now has enrolled great-grandchildren of the original participants. To use the longitudinal method to answer our question, we could start with a group of infants and carefully plot their exposure to violent video games and their levels of physical aggression into adulthood. The longitudinal approach has few logical drawbacks but is expensive and time consuming. Participants drop out of the study because of moves or lack of incentive. Researchers then must worry about whether those who remain in the study still comprise a representative sample. Figure 2.13 Special Designs Let Us See Behaviours Associated with Age. Longitudinal designs control for the cohort effects that are often seen in cross-sectional designs. This longitudinal study shows that verbal ability and verbal memory are fairly stable over the lifetime, but that perceptual speed gradually worsens with age.
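The cohort-effect caption above can be unpacked with simple arithmetic. Using the roughly 3-points-per-decade rise reported by Flynn (1984), here is a back-of-envelope sketch of the confound in comparing 20-year-olds with 70-year-olds cross-sectionally; the ages are hypothetical choices.

```python
# Back-of-envelope sketch of the Flynn-effect confound in a
# cross-sectional IQ comparison.
flynn_points_per_decade = 3   # approximate rise (Flynn, 1984)
age_young, age_old = 20, 70   # hypothetical comparison groups

decades_between_cohorts = (age_old - age_young) / 10
expected_cohort_gap = decades_between_cohorts * flynn_points_per_decade

print(f"Expected gap from cohort alone: {expected_cohort_gap:.0f} IQ points")
# A 15-point "decline" could reflect when people were born,
# not how aging changes intelligence.
```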

Rocksweeper/ Shutterstock.com; © Cengage Learning Source: Adapted from K. W. Schaie (1996), Intellectual development in adulthood: The Seattle Longitudinal Study, p. 271, Cambridge, UK: Cambridge University Press. The third approach, the mixed longitudinal design, combines the cross-sectional and longitudinal methods. Participants from a range of ages are observed for a limited time (usually about five years). This approach is faster and less expensive than the longitudinal method and avoids some of the cohort effects of the pure cross-sectional method.

Summary 2.2 Principles of Research Methods (research method / strengths / weaknesses)

Descriptive methods: Case study

Photos of H.M. by Suzanne Corkin. Copyright 2013 by Suzanne Corkin, used by permission of The Wylie Agency LLC., s44/ZUMA Press/Newscom

Strengths: Can explore new and unusual phenomena; can falsify a hypothesis

2-3 How Do We Draw Conclusions from Data? Asking the right questions and collecting good information are only the beginning of good science. Once we have collected our results, or data, we need to figure out what those data mean for our hypotheses and theories. The interpretation of data is not an arbitrary act—scientists follow specific rules when drawing their conclusions.

2-3a The Importance of Valid and Reliable Measures Data are only as good as the measures used to obtain them. How would we know whether a measure is good or bad? Two standards that any measure must meet are reliability and validity. A valid measure actually measures what it is supposed to measure: your bathroom scale, for example, is supposed to tell you how much you weigh.

PhotoAlto/Alamy Stock Photo Reliability refers to the consistency of a measure. There are several meanings of reliability in science, including test–retest, interrater, inter-method, and internal consistency. For example, Canadian initial attack wildland firefighters must pass a fitness test every year known as the Ontario Wildland Firefighter Fitness Test circuit (WFX-FIT). It would be very problematic if someone who passed the test one day failed the test the next day. However, test–retest reliability of the WFX-FIT has been shown to be very good, just as one would hope (Gumieniak, Gledhill, & Jamnik, 2018a). While appropriate training can lead to improvements in test scores over time, scores on consecutive (back-to-back) days tend to be highly consistent. Good measures also show high interrater reliability, or consistency in the interpretation of a measure across different observers. You can imagine how distressing it would be if, for example, one lab identified you as having a fatal disease on the basis of a blood test and another did not. Inter-method reliability describes the positive correlation among several different approaches to measuring the same feature in an individual. Returning to the WFX-FIT example, the test itself employs various measures of physical fitness (e.g., cardiovascular endurance, muscular strength), which we would expect to be positively correlated with one another. The existence of such correlations supports the reliability of the measure. Finally, internal consistency refers to the degree to which the measures within a single test positively correlate with one another. Validity means that a measure leads to correct conclusions: that it measures the concept it is designed to measure. For example, your bathroom scale is supposed to measure how much you weigh. The data obtained from your bathroom scale can lead you to a valid conclusion ("this is how much I weigh") or an invalid conclusion ("wow, I'm much lighter than the doctor's scale said I am"). How would we determine whether a measure leads to valid conclusions? One approach is to see whether a measure correlates with other existing, established measures of the same concept. As mentioned previously, the WFX-FIT is a job requirement for wildland firefighters. The test is supposed to measure a firefighter's ability to carry out the extremely physically demanding tasks of the job. We would therefore expect scores on this test to correlate with other established measures of physical fitness, such as the Police Officer's Physical Abilities Test (POPAT). In an attempt to assess the validity of the WFX-FIT, researchers at York University have examined how the WFX-FIT circuit compares to the real-world demands of fighting wildfires. Validity was assessed by comparing the oxygen cost of performing the WFX-FIT with the oxygen cost of performing actual on-the-job firefighter tasks. They also had firefighters rate the similarity of the WFX-FIT to their actual on-the-job tasks. Results from both of these measures provided support for the validity of the WFX-FIT (Gumieniak, Gledhill, & Jamnik, 2018b). Reliability and validity are not the same. You can obtain a consistent result (reliability) that lacks meaning (validity), but a measure cannot be valid without also being reliable. For example, if you weigh 90 kilograms and your bathroom scale consistently reports that you weigh 68 kilograms, whether you're looking at the number or your roommate reads it for you, the scale has reliability but not validity.
If you get a wildly different number each time you step on the scale, you have neither reliability nor validity. The measure is not consistent (no reliability) and also fails to measure the construct—weight in this case—that it is designed to measure (no validity).
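The test–retest idea can be sketched in a few lines of Python. This is a minimal illustration assuming invented completion times rather than actual WFX-FIT data; it shows only that test–retest reliability is typically quantified as a correlation between two testing occasions (statistics.correlation requires Python 3.10 or later).

```python
# A minimal sketch of test-retest reliability. The completion times are
# invented for illustration; they are not data from Gumieniak et al.
from statistics import correlation  # available in Python 3.10+

day1 = [15.2, 17.8, 16.4, 21.1, 18.9]  # circuit times (minutes) on day 1
day2 = [15.5, 17.6, 16.9, 20.8, 19.2]  # the same five people the next day

# A Pearson r near +1 means people keep their relative standing across
# days, which is what "highly consistent" scores look like numerically.
r = correlation(day1, day2)
print(f"test-retest reliability (Pearson r) = {r:.2f}")
```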

2-3b Descriptive Statistics Just as we might explore a new research topic using the descriptive research methods outlined earlier, we can use descriptive statistics to explore the characteristics of the data obtained from our research. Descriptive statistics help us organize individual bits of data into meaningful patterns and summaries. For example, if we wanted to investigate trends in female postsecondary students' enrolment in STEM fields over time, we would need to start by summarizing our data in a meaningful way. A spreadsheet with thousands of rows is not a helpful way of summarizing or communicating information. Descriptive data tell us only about the sample we have studied. To determine whether the results from our sample apply to larger populations requires additional methods, as described later in this chapter. Many graduate and professional school programs use the GRE or other standardized tests as part of their admissions criteria. Evaluating the resulting data is easier if we use descriptive statistics to summarize the performances of the thousands of individual students who take the tests.

29september/ Shutterstock.com We might approach this mass of data first by asking how many students are currently enrolled in each field of study, and then examine this separately for males and females. The result of our work would be a frequency distribution. We often illustrate frequency distributions with a bar chart, or histogram, like the one shown in Figure 2.14. Figure 2.14 Frequency Distributions. Descriptive statistics, such as these frequency distributions of Canadian postsecondary students’ chosen field of study, allow us to see meaningful patterns and summaries in large sets of data.
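Tabulating a frequency distribution is straightforward in code. The records below are invented stand-ins, not the Statistics Canada data plotted in Figure 2.14; the sketch simply shows the counting step that underlies a histogram.

```python
# A minimal sketch of building a frequency distribution (the basis of a
# histogram). The field-of-study records are invented for illustration.
from collections import Counter

fields = ["business", "engineering", "health", "business", "arts",
          "engineering", "business", "health", "business"]

frequencies = Counter(fields)          # counts each category
for field, count in frequencies.most_common():
    print(f"{field:12s} {'#' * count} ({count})")  # crude text histogram
```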

Source: Adapted from Statistics Canada, Postsecondary enrolments, by registration status, institution type, sex and student status. https://www150.statcan.gc.ca/t1/tbl1/en/tv.action?pid=3710001801. Reproduced and distributed on an “as is“ basis with the permission of Statistics Canada. Central Tendency Frequency distributions are a useful starting place, but for certain types of data we might also be interested in identifying the “average” score on our measures, or the central tendency of our data set. There are three types of measures for central tendency: the mean, median, and mode for each group of scores. The mean is the numerical average of a set of scores, computed by adding all the scores together and dividing by the number of scores. Many postsecondary students may find themselves taking the GRE, or Graduate Record Examination, which is a requirement for many graduate school programs. The GRE is composed of three subtests: verbal reasoning, quantitative reasoning, and analytical writing. Scores for the verbal and quantitative reasoning tests fall between 130 and 170. Scores on the analytical writing component range from 0 to 6. For the 1.7 million students who took the GRE between 2014 and 2017, the mean on the verbal reasoning test was 150.05, the mean on the quantitative reasoning test was 152.8, and the mean on the analytical writing measure was 3.5 (Educational Testing Service, 2018). The median represents a halfway mark in the data set, with half of the scores above it and half below. The median is far less affected by extreme scores, or outliers, than the mean. In our GRE data, the median scores—150.75 for verbal reasoning, 153 for quantitative reasoning, and 3.75 for analytical writing—are quite close to the mean scores. Why, then, would you ever need to consider a median? In some sets of results, you might find some extreme scores that could affect the mean. For example, if you asked employees at a small business to report their current annual salaries, you might get the following results: $40 000, $45 000, $47 000, $52 000, and $350 000 (we’re assuming this is the boss). The mean in this example would be $106 800, but that figure doesn’t provide a good summary of these numbers. The median, $47 000 in this case, is more representative of your overall results. Look at Figure 2.15 for another example. Figure 2.15 What New Information Can We Learn from a Median? In many cases, like the GRE data, means and medians are similar. In other cases, such as income, these two measures of central tendency provide different pictures. This graph shows mean and median monthly incomes for different countries in 2010 (in international $).
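The salary example can be checked directly with Python's statistics module, using the same five numbers given above.

```python
# Verifying the salary example from the text: one extreme score (the
# boss's $350 000) pulls the mean far above the median.
from statistics import mean, median

salaries = [40_000, 45_000, 47_000, 52_000, 350_000]

print(f"mean   = ${mean(salaries):,.0f}")    # $106,800, distorted by the outlier
print(f"median = ${median(salaries):,.0f}")  # $47,000, more representative here
```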

Source: Adapted from Our World In Data (2015). Mean versus median monthly per capita expenditure or income (2014). Published online at OurWorldInData.org, https://ourworldindata.org/grapher/mean-versus-median-monthly-per-capita-expenditure-or-income The mode refers to the score that occurs most frequently, and it is easy to determine by looking at a histogram. The usefulness of the mode depends on the research question that you are asking and the type of data that you are dealing with. In the case of the GRE, the mode provides little interesting information. But if we return to the data presented in Figure 2.14, we can see that the most common (or modal) field of study for male postsecondary students is architecture, engineering, and related technologies. For female students, the most common field of study is business, management, and public administration. For categorical data like this, the mode is the only measure of central tendency that we have at our disposal, since we can't calculate an "average" field of study the way that we can calculate an average score or income. The mode also has an advantage over the mean or median when there is more than one mode in a set of data. For example, the mean age of onset for anorexia nervosa, which is discussed in Chapter 7, is 17 years of age, but the distribution is bimodal, which means that it has two substantial modes. One peak occurs around the age of 14, when many teens struggle with their changing shapes, and another occurs around the age of 18, when many teens leave home and make food choices without the watchful eyes of their parents (Halmi, Casper, Eckert, Goldberg, & Davis, 1979). An intervention program designed to coincide with the most likely ages of onset would probably be more effective than one timed to coincide with the mean age of onset (see Figure 2.16). Figure 2.16 What New Information Can We Learn from a Mode? The average age of onset for anorexia nervosa is 17 years, but this measure masks the important fact that age of onset shows two modes—one at 14 years and a second at 18 years. For public health officials wishing to target vulnerable groups for preventive education, the modes provide better information than the mean.
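Because a distribution can have more than one mode, Python's statistics.multimode (available in Python 3.8 and later) returns all of them. The ages below are invented so that they peak at 14 and 18; they echo the bimodal pattern described above but are not the Halmi et al. (1979) data.

```python
# A minimal sketch of finding the mode(s) of a data set, including the
# bimodal case. Ages are invented to produce peaks at 14 and 18.
from statistics import mean, multimode

ages_of_onset = [13, 14, 14, 14, 15, 16, 17, 18, 18, 18, 19, 21]

print(multimode(ages_of_onset))        # [14, 18]: two modes
print(round(mean(ages_of_onset), 1))   # 16.4: the mean hides both peaks
```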


Custom Medical Stock Photo/Alamy Stock Photo; © Cengage Learning Source: Adapted from K. A. Halmi, R. C. Casper, E. D. Eckert, S. C. Goldberg, and J. M. Davis (1979), "Unique Features Associated with Age of Onset of Anorexia Nervosa," Psychiatry Research, 1(2), 209–215. Variance In addition to being curious about central tendency, we might want to know how clustered our scores are. The traditional way to look at the variance of scores is to use a measure known as the standard deviation, which tells us how tightly clustered around the mean a group of scores is. A smaller standard deviation suggests that most scores might be found near the mean, whereas a larger standard deviation means that the scores are spread out from the mean. Returning to our salary example, we had five salaries with a mean of $106 800. To obtain the standard deviation, which is easy to do with a calculator, you subtract the mean of 106 800 from each salary, square each difference (to eliminate minus signs), add the squares, divide the total by one less than the number of salaries (four in this case, a correction used when working with a sample), and take the square root of the result. In this case, we end up with a standard deviation of 136 021, which tells us that, roughly speaking, these salaries differ from their mean by about 136 021 on average. If we discard the extreme salary ($350 000) and find the standard deviation of the remaining four salaries, it turns out to be smaller: 4966.56. These results suggest that the distribution of the first four salaries is tightly clustered, whereas the distribution of all five salaries is more spread out. The Normal Curve Many measures of interest to psychologists, such as scores on intelligence tests, which are discussed in Chapter 10, appear to form a normal distribution, illustrated in Figure 2.17. The ideal normal curve in this illustration has several important features. First, it is symmetrical. Equal numbers of scores should occur above and below the mean. Second, its shape indicates that most scores occur near the mean, which is where our measure of variability plays a role. In the standard normal curve, shown in Figure 2.17(a), 68 percent of the population falls within one standard deviation of the mean, 95 percent falls within two standard deviations, and 99.7 percent of the population falls within three standard deviations. Instruments for assessing intelligence, discussed in Chapter 10, frequently convert raw scores earned by many participants to fit a normal distribution with a mean of 100 and a standard deviation of 15. As a result, we would expect 68 percent of test takers to receive an IQ score between 85 and 115. A full 95 percent would most likely score between 70 and 130, leaving only 2.5 percent to score above 130 and another 2.5 percent to score below 70. Figure 2.17 The Normal Curve. Many measures of interest to psychologists take the approximate form of a normal distribution. These graphs compare a standard normal curve, shown in (a), to the distribution of scores on a standardized test of intelligence, the Wechsler Adult Intelligence Scale, shown in (b).
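The standard deviations reported above can be reproduced exactly. Note that the figures in the text come from the sample formula (dividing by one less than the number of scores), which is what Python's statistics.stdev computes.

```python
# Verifying the salary standard deviations from the text. statistics.stdev
# divides the summed squared deviations by n - 1 (the sample formula).
from statistics import mean, stdev

salaries = [40_000, 45_000, 47_000, 52_000, 350_000]

print(f"mean = {mean(salaries):,.0f}")   # 106,800
print(f"sd   = {stdev(salaries):,.0f}")  # ~136,021: scores spread far from the mean

without_boss = salaries[:-1]             # drop the extreme $350,000 salary
print(f"sd without outlier = {stdev(without_boss):,.2f}")  # ~4,966.56: tightly clustered
```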

© Cengage Learning Descriptive Statistics with Two Variables In our discussion so far, we have been describing single variables, such as chosen field of study, salaries, and GRE scores. In psychological research, we often want to describe the relationships among multiple variables. We can illustrate the relationship between two variables in a scatterplot like the one shown in Figure 2.18. Each dot represents the intersection between scores on two variables of interest. For example, some people have argued that the GRE is not a useful predictor of success in graduate school. One measure of success in graduate school is the number of conference presentations given. In the scatterplot in Figure 2.18, researchers in the biomedical field examined the relationship between GRE quantitative scores and number of presentations students gave in graduate school (Moneta-Koehler, Brown, Petrie, Evans, & Chalkley, 2017). As we can see, there appears to be no relationship between these two variables. Figure 2.18 A Scatterplot. A scatterplot allows us to visualize the relationship between two variables, such as the GRE quantitative score and the number of presentations for biomedical graduate students at Vanderbilt University Medical School. In this case, there is no relationship between the two variables. GRE scores are not a useful predictor of how many conference presentations a graduate student will deliver during their time in the program.

Source: Adapted from L. Moneta-Koehler, A. M. Brown, K. A. Petrie, B.J. Evans, & R. Chalkley (2017), "The Limitations of the GRE in Predicting Success in Biomedical Graduate School." PLoS ONE 12(1): e0166742. doi:10.1371/journal.pone.0166742 Although this scatterplot gives a sense that GRE scores and the number of conference presentations have no systematic relationship, we can compute that relationship exactly using a correlation coefficient. Correlation coefficients can range from −1.00 to +1.00. A correlation of −1.00 and a correlation of +1.00 are equally strong but differ in the direction of the effect. A zero correlation indicates that the two variables have no systematic relationship. The closer a correlation coefficient is to −1.00 or to +1.00, the stronger the relationship between the two variables. When the score is −1.00 or +1.00, the correlation is perfect—all data points follow the pattern. A value between 0 and 1.00 or 0 and −1.00 indicates the direction of a correlation, but the relationship is not perfect—not every data point conforms to the pattern.
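For readers curious about the computation itself, here is a from-scratch sketch of Pearson's correlation coefficient. The study-hours data are invented to show a strong positive correlation; feeding in scatterplot data like Figure 2.18's would instead give an r near zero.

```python
# A minimal from-scratch Pearson correlation coefficient. The example
# values are invented; they illustrate a strong positive relationship.
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))  # co-variation
    sx = sqrt(sum((a - mx) ** 2 for a in x))              # spread of x
    sy = sqrt(sum((b - my) ** 2 for b in y))              # spread of y
    return cov / (sx * sy)

hours_studied = [2, 4, 6, 8, 10]
exam_scores = [55, 62, 70, 74, 85]
print(f"r = {pearson_r(hours_studied, exam_scores):+.2f}")  # about +0.99
```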

2-3c Inferential Statistics Although we can learn a great deal from descriptive statistics, most research described in this textbook features the use of inferential statistics, so called because they permit us to draw inferences or conclusions from data. Descriptive statistics allow us to talk about our sample data but do not allow us to extend our results to larger groups. To reach conclusions about how our observations of a sample might fit the bigger picture of groups of people, we use inferential statistics. Although inferential statistics can be powerful, we must be cautious about making generalizations from our results to larger populations. To generalize means to extend your conclusions to people outside your research sample. Psychology over the years has been justifiably criticized for its reliance on university students as research participants. This criticism arises because researchers are usually university professors, and the students who attend the university are a convenient source of willing study participants. Today’s psychological scientists recognize that university students do not comprise a representative sample of people, and they go to great lengths to recruit samples of participants that are more diverse in age, race, ethnicity, socioeconomic background, and other demographic variables. Having a diverse sample supports more generalization than using a sample of university students. To illustrate the use of inferential statistics, let’s consider male and female performance on the Ontario Wildland Firefighter Fitness Test circuit (WFX-FIT). In one study examining a sample of normally active general population (non-firefighter) participants, men finished the circuit in an average time of 15 minutes and 33 seconds and women finished the circuit in an average time of 21 minutes and 59 seconds (Gumieniak et al., 2018a). Does this mean that men perform better than women on this fitness test? Or do men and women perform similarly, and this group is just an unusual sample of test takers? The default position, stating that there is no real difference between two measures (completion times produced by men and women in this example), is known as the null hypothesis. Recall that we cannot “prove” a hypothesis to be correct, but we can demonstrate that a hypothesis is false. Rejecting the null hypothesis suggests that alternative hypotheses (there might be a relationship between gender and completion times on the WFX-FIT) should be explored and tested. Connecting to Research Do You Believe in ESP? ESP stands for Extrasensory Perception, with extra in this case meaning “outside” the boundaries of the normal information that we obtain from our various senses, such as vision, hearing, and touch. The study of ESP is part of the larger field of parapsychology, or the study of psychic phenomena lying outside the typical boundaries of the field of psychology described in Chapter 1. Among the abilities grouped into the category of ESP are telepathy (the ability to communicate with other minds without using the usual methods of speaking or writing), clairvoyance (the ability to perceive objects that do not affect the known senses), precognition (knowledge of future events), and premonition (emotional anticipation of future events). In 2005, Gallup pollsters found that 41 percent of Americans said that they believed in ESP, with 25 percent not sure and 32 percent not believing (Gallup Poll News Service, 2005). 
Of the scientists who are members of the National Academy of Sciences, 96 percent said that they did not believe in ESP (McConnell & Clark, 1991). A study of ESP (Bem, 2011) generated considerable discussion in the scientific community about everything from the statistics that we use to the effects of investigator bias. Evaluating this study provides a good opportunity to practise your critical thinking skills and to apply what you have learned about validity and reliability. Methods A total of 100 undergraduates (50 men and 50 women) participated. Stimuli (both erotic and nonerotic photographs) were selected from a standard set known as the International Affective Picture System. During each trial, participants saw two curtains on a computer screen and were asked to predict which curtain hid a picture. The sequencing of the erotic and nonerotic pictures and the left–right positions was determined by a random number generator after the participant made a selection. This timing was designed to test the precognition of future events (the participants selected a side of the screen before the random number generator selected a location for the picture). Here is where Bem’s methods get a bit murky. The first 40 participants saw 12 erotic pictures, 12 negative pictures (unpleasant images), and 12 neutral pictures. Then, for reasons not well explained in the paper, the method was changed for the remaining 60 participants, who were shown 18 erotic and 18 nonerotic photos, 8 of which were described as romantic but not erotic (e.g., a couple at a wedding). It is quite unusual for researchers to change their methods in the middle of an experiment, and more troubling when there doesn’t seem to be a good reason to do so. The popularity of television shows like Ghost Hunters might be related to the large number of Americans who report a belief in ESP (41 percent). In contrast, 96 percent of the members of the National Academy of Sciences say they do not believe in ESP.

Syfy Channel/Courtesy Everett Collection The Question: Nine experiments involving more than 1000 participants tested different types of precognition and premonition. We will focus our attention on the first experiment, which tested the following hypothesis: Participants should be able to anticipate the position (right side or left side of a computer screen) of an erotic photograph (see Figure 2.19). Is the following study of this hypothesis valid and reliable? Figure 2.19 Evidence of ESP? Participants in Bem’s (2011) study were supposed to predict behind which of two curtains a picture would appear.

Anson0618/ Shutterstock.com Ethics The only potential ethical challenge in this study is the presentation of erotic photos. Potential participants should be warned of this aspect before they agree to continue with the study. Results If we have two choices, we have a 50 percent chance of guessing correctly on each trial. Bem reported that the future position of the erotic images was chosen correctly on 53.1 percent of the trials, and the future position of the nonerotic images was chosen correctly on 49.8 percent of the trials. Bem reported that his results were statistically significant, or were unlikely to happen because of chance. Conclusions Bem concluded that the choices made by his participants were better than chance, supporting his hypothesis that precognition could be demonstrated. What do others think of Bem's results? James Alcock, writing for the Skeptical Inquirer, concluded that "just about everything that could be done wrong in an experiment occurred here" (2011, p. 31). Among Alcock's concerns were Bem's changing of his method midway through the experiment and his questionable use of statistical analyses. As discussed earlier, replication provides an important check on possible researcher bias, and failure to replicate indicates serious flaws in an experiment. So far, the three known replications of Bem's experiments have failed to produce significant results. Despite the flaws, however, Bem's experiments have contributed to science by stimulating a lively discussion of scientific and statistical methods. How do we know when a hypothesis should be rejected? Like most sciences, psychology has accepted odds of 5 out of 100 that an observed result is due to chance as an acceptable standard for statistical significance. We can assess the likelihood of observing a result due to chance by repeating a study, like throwing dice multiple times. We could give the WFX-FIT to 100 randomly selected samples of normally active male and female adults. If men and women take about the same time to finish the circuit or women finish faster than men in 5 or more of these 100 samples, we would reject our "Men complete the test faster" hypothesis as false. This type of careful analysis of the WFX-FIT data has confirmed that the baseline (pre-training) differences on the WFX-FIT between males and females are statistically significant, which has led to concerns that the test is discriminatory and would place female firefighters at an unfair disadvantage, since the test uses the same cut-off score (17 minutes and 15 seconds in Ontario) for all individuals (Gumieniak et al., 2018a; see Figure 2.20). However, research has indicated that with training and familiarization with the circuit, the passing rate for females can be significantly increased, though males continue to outperform females on the test (see Figure 2.20). Figure 2.20 Are Male and Female Completion Times on the WFX-FIT Different? Inferential statistics allow us to decide whether the observed differences between the performance of males and females on the WFX-FIT represent a real gender difference or whether they just occur by chance.
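One intuitive way to estimate how often a difference as large as an observed one would arise by chance is a permutation test, sketched below. The completion times are invented, and this is our illustration of the general logic of significance testing, not the analysis reported by Gumieniak et al. (2018a).

```python
# A minimal permutation-test sketch of the chance logic described above:
# if gender labels were irrelevant, reshuffling them should often produce
# mean differences as large as the observed one. All times are invented.
import random

men = [14.8, 15.6, 15.1, 16.0, 15.4]     # WFX-FIT times (minutes), invented
women = [21.5, 22.4, 21.9, 22.8, 21.3]

observed = abs(sum(men) / len(men) - sum(women) / len(women))
pooled = men + women

extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)                       # scramble the group labels
    fake_men, fake_women = pooled[:len(men)], pooled[len(men):]
    diff = abs(sum(fake_men) / len(fake_men) - sum(fake_women) / len(fake_women))
    if diff >= observed:
        extreme += 1

p = extreme / trials
print(f"p = {p:.4f}")  # below .05 would lead us to reject the null hypothesis
```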


Source: Gumieniak, R. J., Gledhill, N., & Jamnik, V. K. (2018). Physical employment standard for Canadian wildland firefighters: examining test–retest reliability and the impact of familiarisation and physical fitness training, Ergonomics, doi: 10.1080/00140139.2018.1464213; with permission from Taylor & Francis Ltd, http://www.tandfonline.com. Although psychology and most other sciences have relied heavily on significance testing, this approach has its share of weaknesses. It requires a “yes or no” response—we either reject our null hypothesis or not. The results of a significance test do not tell us anything about how meaningful or large the effect is. For example, the results of a significance test might tell us that there is a difference between male and female completion times on the WFX-FIT test that is unlikely to be due to chance, but it does not tell us anything about the size of that difference. One alternative approach is the use of estimation, which includes a report of the 95 percent confidence intervals in addition to the means for each condition (Cumming, 2012; see Figure 2.21). The additional information provided by the confidence interval helps us evaluate the magnitude of the difference better than a simple reporting of the statistical significance of the difference. Figure 2.21 Confidence Intervals Provide Information about the Magnitude of Differences. Earlier in this chapter, we reviewed a study comparing hyperactivity scores among children drinking two juices containing common food additives (A and B) and plain juice (placebo). Computing 95 percent confidence intervals, noted by the red lines, gives us an idea of how big the differences are (these are actually quite small, but significant). We shouldn’t assume that overlapping confidence intervals indicate a lack of statistical significance. Mixture A was different from placebo at the p < .05 level, while Mixture B was different from placebo at the p < .01 level.
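A 95 percent confidence interval for a mean can be sketched in a few lines. This uses the normal approximation (mean plus or minus 1.96 standard errors) with invented scores; a real analysis of a sample this small would typically use a t-based multiplier instead.

```python
# A minimal confidence-interval sketch using the normal approximation.
# The hyperactivity scores are invented, not the data behind Figure 2.21.
from math import sqrt
from statistics import mean, stdev

scores = [3.1, 2.8, 3.5, 3.0, 3.3, 2.9, 3.4, 3.2]

m = mean(scores)
se = stdev(scores) / sqrt(len(scores))   # standard error of the mean
low, high = m - 1.96 * se, m + 1.96 * se

print(f"mean = {m:.2f}, 95% CI = [{low:.2f}, {high:.2f}]")
```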

2-4 How Can We Conduct Ethical Research? We mentioned earlier that deceiving participants into thinking they were truly in danger raised ethical questions. On what basis do researchers decide what they can and cannot do to their research participants? Although most studies published in psychological journals involve the use of human participants, psychology also has a rich heritage of animal research. Separate guidelines have been developed for each type of subject. Researchers working in universities and other agencies receiving federal funding seek the approval of research ethics boards (REBs) for human participant research and institutional animal care committees (ACCs) for animal research before conducting their studies. The REBs and ACCs are guided by federal regulations and research ethics endorsed by professional societies such as the Canadian Psychological Association (CPA). REBs and ACCs must include at least one member of the community outside the university or agency, which helps prevent inappropriate research from being conducted in secret. These procedures do not apply to institutions that do not have federal funding, such as private genetics research corporations or consumer product corporations (cosmetics, etc.), although efforts are being made to bring these organizations into compliance with federal standards. As you review the ethical guidelines for both human and animal research participants, keep in mind that the guidelines look simpler when you read about their provisions than when you try to implement them in the context of real research. This is why the final approval decision lies with a committee, as opposed to an individual.

2-4a Human Participants At the core of ethical standards for human research is the idea that participation is voluntary. No participant should be coerced into participating. Although psychologists are well aware that people who volunteer to participate in research are probably quite different in important ways from those who don’t volunteer, we have chosen to give research ethics a higher priority than our ability to generalize research results. Many populations that are interesting to psychologists are unable to sign informed consent forms legally and require additional ethical protection. In the case of research with infants, parents are required to sign informed consent forms on their child’s behalf.

Picture Partners/Alamy Stock Photo To ensure that a participant is a willing volunteer, researchers must make provisions for reasonable incentives. Incentives, such as pay or extra credit for participation, must not be so extreme that they become the primary motivation for prospective volunteers. To decide whether to volunteer for research, a person must have some knowledge of what the research will entail. Researchers must provide prospective participants with an informed consent form, which provides details about the purpose of the study and what types of procedures will occur. In psychological research, we have the added burden of occasionally dealing with participants who are limited in their abilities to provide informed consent because of the conditions that make them interesting to study. Developmental psychologists have an obvious interest in children, but a person cannot sign a legal informed consent form until age 18. Can you obtain informed consent from a patient with schizophrenia, who suffers from hallucinations and irrational, delusional beliefs, or from a person in the later stages of Alzheimer’s disease, whose memory and reasoning have deteriorated because of the condition? In these cases, legal permission must be obtained from a qualified guardian. The university REBs play an essential role in evaluating these ethical dilemmas case by case. The Tuskegee syphilis experiment, conducted by the U.S. Public Health Service from 1932 to 1972, involved 400 African-American men who had contracted syphilis. The study’s failure to treat or inform these participants about their health led to new regulations to prevent such unethical research from being repeated.

Source: Department of Health, Education, and Welfare. Public Health Service. Health Services and Mental Health Administration. Center for Disease Control. Venereal Disease Branch (1970–1973)/National Archives. Research also should be conducted in a manner that does no irreversible harm to participants. In some cases, to avoid participants' desire to appear normal and their tendency to try to outguess the research, researchers might say that they are investigating one factor when they are interested in another. Most cases of deception are quite mild, such as when participants are told that a study is about memory when it is actually a study of some social behaviour. When researchers must deceive their participants, extra care must be taken to debrief participants and answer all their questions following the experiment. Research using human participants must also rigorously protect privacy and confidentiality. Privacy refers to the participants' control over the sharing of their personal information with others, and methods for ensuring privacy are usually stated in the informed consent paperwork. For example, some studies involve the use of medical records, which participants agree to share with the researchers for the purpose of the experiment. Confidentiality refers to the participants' right to have their data revealed to others only with their permission. Confidentiality is usually maintained by such practices as substituting codes for names and storing data in locked cabinets. Collecting data anonymously, so that even the researchers do not know the identity of participants, is the surest way to protect privacy and confidentiality. Science learns from its past ethical lapses. One of the most egregious examples of unethical research conducted in the United States was the Tuskegee syphilis experiment, which lasted from 1932 until 1972. Researchers from the U.S. Public Health Service recruited about 400 impoverished African-American men who had contracted syphilis to study the progression of the disease. None of the men were told they had syphilis, and none were treated, even after penicillin became the standard treatment for the disease in 1947. Many current regulations related to research ethics were developed in response to this experiment. While principles such as informed consent and confidentiality apply to all human participants, additional considerations need to be made if research is being conducted in a community where the cultural norms of the researcher may clash with the cultural practices and beliefs of the community. The James Bay Cree of northern Quebec have been the focus of eight psychological studies, and seven of these eight cases resulted in the researchers being ejected from the community (Darou, Hum, & Kurtness, 1993; Darou, Kurtness, & Hum, 2000). Reactivity among cultural groups who become the focus of Western research has been widely documented, and while guidelines surrounding such research exist (e.g., respect local authority, prepare culturally sensitive materials; see Trimble, 1981), there is a lot of variability in how well individual researchers conduct their cross-cultural studies. One of the causes of the reactivity on the part of the Cree has been the researchers' lack of respect for decisions made by local authorities. Darou, Hum, and Kurtness (1993, p. 327) provide the following example (as told to them by R. F. Salisbury, June 1980) of a research assistant arriving at a remote Cree village:

On the first day, he asked the chief for access to subjects. He was refused. He then asked the local school principal. He was again refused. He next asked the minister. The minister agreed. The next day, the principal, chief, and minister met for their regular weekly lunch to discuss community issues. They felt that the research assistant's actions had caused the potential for conflict among them. The research assistant was then told to "take the next airplane out of the village or sleep in a snow bank." From the chief's point of view, an undersupervised and arrogant stranger had come to his village and endangered the social peace. The chief believed that psychologists conduct research just to produce papers and have little or no regard for the well-being of the subjects.

The problem in this case is obvious, and it could have been easily avoided if the researcher had been more respectful, flexible, and patient. Darou, Kurtness, and Hum (2000) emphasize that it is entirely inappropriate for anyone to conduct research with Indigenous participants unless they have been invited into the community and have a clear purpose. The Cree often (and rightly) perceive research as being more beneficial for the researcher than for the community, so the researcher should take steps to correct this problem as much as possible. For example, by doing research on the history of the community using university resources (which the Cree would not normally have access to), the researcher may be able to provide the community with valuable information. While appropriate training is of the utmost importance for anyone conducting research, this is especially true when researchers will be interacting with individuals from culturally distinct groups.

2-4b Animal Subjects The topic of using animals in research is guaranteed to stimulate lively, and possibly heated, discussion. Some people are adamantly opposed to any animal research, whereas others accept the concept of using animals, so long as certain conditions are met. The Canadian Council on Animal Care (CCAC) is the national peer-review organization responsible for setting, maintaining, and overseeing the implementation of high standards for animal ethics and care in science throughout Canada. According to the CCAC, in 2017 mice (31.2 percent) were the most commonly used research animal in Canada, followed by birds (27 percent) and fish (19.1 percent). About 7 to 8 percent of published research in psychology journals involves the use of animals as subjects (American Psychological Association [APA], 2012). A total of 90 percent of the animals used are rodents and birds, with 5 percent or fewer studies involving monkeys and other primates. According to the APA, the use of dogs and cats in psychological research is rare. Ethical guidelines for animal research require setting a clear purpose for the experiment, providing excellent care for the animals, and minimizing pain and suffering.

Agencja Fotograficzna Caro/Alamy Stock Photo In Canada and around the world, the “Three R’s” tenet guides the use of animal research:

  • Replacement refers to research methods that avoid or replace the use of animals in an area of research where they would have otherwise been used.
  • Reduction refers to any strategy that will result in fewer animals being used.
  • Refinement refers to the modification of animal care or experiment procedures to minimize pain and distress (CCAC, 2019).

Research using animals must demonstrate a clear purpose, such as benefiting the health of humans or other animals. In addition to serving a clear purpose, animal research requires excellent housing, food, and veterinary care. The most controversial ethical standards relate to minimizing the pain and suffering experienced by animal research subjects. The Canadian Council on Animal Care provides guidelines for the use of pain, surgery, stress, and deprivation with animal subjects, as well as the termination of an animal's life. The standards approximate the community standards that we would expect from local humane societies tasked with euthanizing animals that are not adopted. Psychology Takes on Real-World Problems Using Field Experiments to Test Strategies for Encouraging Conservation Behaviours One of the major societal challenges we face today is getting people to engage in conservation behaviours, such as reducing household energy use or water consumption. A large body of research supports the idea that social norms messaging campaigns can be effective in getting people to engage in conservation behaviours. As we will discuss in Chapter 13, social norms are generally accepted ways of behaving, thinking, or feeling. Because most people are compelled to follow social norms, providing people with normative information regarding energy or water use (e.g., "most people turn off the tap while brushing their teeth") can be an effective way of encouraging conservation behaviours. But how do we know such campaigns are effective? We could conduct a survey where we present people with different posters or slogans and ask them to report on which they find the most persuasive, but that wouldn't tell us anything about whether the campaign would actually lead to behaviour change. We could conduct a laboratory experiment where we place different posters above a sink and see whether it affects the amount of water people use, but even if we found that a particular poster led to reduced water usage, the effect wouldn't necessarily generalize to real-world settings. And putting a poster above every sink is probably not a particularly practical solution. So what type of research approach would be the most useful in this case? Instead of relying on surveys or laboratory experiments, a majority of research on conservation campaigns is conducted using field experiments. These experiments involve the manipulation of independent variables and the measurement of dependent variables, but instead of being conducted in a tightly controlled laboratory setting, these experiments are done "in the field," such as in a residential neighbourhood. For example, one water conservation campaign involved 45 000 residential homes in California, where droughts, climate change, and other issues have culminated in a long-standing water crisis. Households were randomly assigned to receive different email messages regarding their water usage. Some received no social information (control condition), some received social comparison information with no visual cue, some received social comparison information with a visual cue (cartoon water droplet), and others received social comparison information along with a visual indicator of whether they were doing poorly (sad cartoon water droplet) or doing well (happy cartoon water droplet). Researchers then tracked how much water each household used.
Results from the study showed that providing normative information that also included an indicator of social judgment (e.g., happy or sad face) led to the largest reductions in water use. Importantly, by tracking whether the emails were opened or not, the researchers were also able to show that receiving a “frowny face” email message did not appear to upset people or make them want to avoid this information, as they continued to open subsequent water report emails. Results from randomized field experiments like this one can help us figure out the most effective ways of getting people to increase resource conservation behaviour. Examples of the different images used in the household water reports.
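The random assignment step at the heart of field experiments like this one is easy to sketch. The condition labels below paraphrase the ones described in the text, and the household IDs are invented.

```python
# A minimal sketch of randomly assigning households to conditions, the
# step that lets field experiments support causal claims. IDs are invented.
import random

conditions = ["control", "norm only", "norm + neutral cue", "norm + happy/sad cue"]
households = [f"household_{i}" for i in range(1, 9)]

random.shuffle(households)                             # randomize the order
assignment = {house: conditions[i % len(conditions)]   # deal out evenly
              for i, house in enumerate(households)}

for house, condition in sorted(assignment.items()):
    print(f"{house:12s} -> {condition}")
```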


Images courtesy of Dr. S.P. Bhanot Source: From Bhanot, S.P. (2018). Isolating the effect of injunctive norms on conservation behavior: New evidence from a field experiment in California, Organizational Behavior and Human Decision Processes, https://doi.org/10.1016/j.obhdp.2018.11.002; permission conveyed via Copyright Clearance Center.

Summary 2.3 Principles of Ethical Research
[Table summarizing ethical principles by subject type. Only fragments survive: Human participants (photo: Department of Health, Education, and Welfare. Public Health Service. Health Services and Mental Health Administration. Center for Disease Control. Venereal Disease Branch (1970–1973)/National Archives) — no coercion; informed consent. Animal subjects (photo: Agencja Fotograficzna Caro/Alamy Stock Photo) — necessity.]

2-5 chapter summary Chapter Summary We rely on the careful methods of science in order to answer questions and test hypotheses about the mind and behaviour. Science is not perfect, and as humans, scientists themselves are prone to all sorts of errors and biases. But the scientific method is the best tool we have, and even when it leads us to wrong conclusions, its never-ending, self-correcting nature will eventually steer us in the right direction. Descriptive studies often focus on single variables and can provide psychologists with a starting place for research on a topic. The data collected through descriptive research often lead to claims regarding the frequency or prevalence of a behaviour, and this knowledge can then be used to generate testable hypotheses. Correlational research examines the relationships between two or more variables. The data collected through correlational methods lead to claims regarding the direction and strength of the relationship between two variables. However, in order to make causal claims about relationships between variables, we must conduct experiments where we carefully manipulate one variable and measure its effects on another variable. Through the use of random assignment and controlled settings, researchers attempt to ensure that the independent variable is the only thing that is changing across experimental conditions. This way, if noticeable differences across the conditions are found, we can be confident that these differences are due to our experimental manipulation and not some other factor. Descriptive statistics allow us to summarize and communicate the findings of our research with others. Inferential statistics allow us to use the data we have collected from a sample of participants and make inferences about larger populations. Importantly, we always need to keep in mind that the results of any single study need to be replicated before we place confidence in them. A single study, even if it is the most perfectly conducted study, does not tell us much on its own, just as the single stroke of a paintbrush doesn't reveal the whole painting. And finally, no matter what type of research we are conducting, we must always be sure that it is conducted ethically. Human participants must be able to provide their informed consent and should be fully debriefed at the end of the study. Research involving humans or animals should always be done in a way that minimizes harm and maximizes benefits. Advances in psychological science would not be possible without the help of both human participants and animal subjects.

2-5a key terms Key Terms The Language of Psychological Science Be sure that you can define these terms and use them correctly.

  • case study
  • confirmation bias
  • confounding variables
  • constructs
  • control group
  • correlations
  • critical thinking
  • cross-sectional study
  • dependent variable
  • descriptive methods
  • descriptive statistics
  • double-blind procedure
  • experiment
  • experimental groups
  • field experiments
  • focus groups
  • generalizations
  • hypothesis
  • independent variable
  • inferential statistics
  • informed consent
  • interview
  • longitudinal study
  • mean
  • measure
  • median
  • meta-analysis
  • mixed longitudinal design
  • mode
  • naturalistic observation
  • normal distribution
  • null hypothesis
  • objectivity
  • operationalization
  • peer review
  • placebo
  • population
  • publication bias
  • random assignment
  • reliability
  • replication
  • sample
  • science
  • standard deviation
  • statistical significance
  • surveys
  • theories
  • third variable
  • validity
  • variables

chapter 3 intro Chapter Introduction Environmental factors such as stress, diet, smoking, and exercise can influence whether a gene is turned on or off.


Argosy Publishing, Inc. Learning Objectives

  1. Analyze the role of genes as the building blocks of human nature.
  2. Appraise the importance of heritability estimates, twin studies, adoption studies, and epigenetic analyses in the field of behavioural genetics.
  3. Compare the roles played by mutation, natural selection, migration, and genetic drift as mechanisms of evolution.
  4. Discuss how the human brain might represent an adaptation for coping with complex social behaviour.
  5. Explain the mechanisms by which intrasexual and intersexual selection might influence the evolution of human behaviour.
  6. Identify the cultural mechanisms by which nature and nurture can interact to influence human behaviour. Growing up, Leora Eisen and Linda Lewis (pictured in photo) were identical in almost every way. But as adults, Linda suffered through numerous bouts of cancer, eventually dying as a result of leukemia at the age of 52. Meanwhile, her identical twin sister, Leora, has remained perfectly healthy. Leora is a Canadian filmmaker who had begun making a documentary about twins just before her sister's leukemia diagnosis. While the film, Two of a Kind (90th Parallel Productions), features fascinating stories about other twin pairings, it also documents Leora's personal struggle to understand why her sister developed cancer and she did not. Scientists have also long been fascinated by identical twins and their similarities and differences. If two people share the same deoxyribonucleic acid (DNA), as identical twins do, why does one get cancer while the other stays healthy? Why does one become obese while the other remains fit? When science began to unravel some mysteries about genes and how they work, answers to these questions emerged. You have the same DNA in each cell of your body, yet some cells develop into heart cells, others into brain cells, and so on. How does the same set of DNA know how to make these different types of cells? Genes, or segments of DNA that produce specific proteins, can be turned on and off. The genes that are not turned off are free to produce the proteins needed to build a particular kind of cell, whether that is a skin cell or a liver cell. Only about 10 percent to 20 percent of the genes in a particular type of cell, such as a skin cell or brain cell, are active.

Courtesy of Leora Eisen Genes do not just turn on and off as they build a body during development. Your ongoing interactions with the environment can also turn genes on or off. What you eat, whether you smoke or drink, your stress levels, and other environmental factors can influence how your DNA works. Our understanding of these ongoing interactions between genes and the environment is the reason psychologists no longer argue about the separate contributions of nature and nurture. In this chapter, we will explore these interactions in more detail. Let’s zoom in to see what’s happening when the environment interacts with DNA. In mice, a gene called Agouti produces yellow fur and obesity when it is turned on, but brown fur and normal weight when it is turned off (Dolinoy, Huang, & Jirtle, 2007). Certain environmental factors can turn the gene on or off. When pregnant mother mice ate food containing bisphenol A (BPA), a chemical found in food and beverage containers, baby bottles, dental sealants, and food cans, their babies had yellow fur and were obese. The BPA turned on the Agouti gene. When pregnant mice are fed a diet containing BPA, found in plastics and other consumer products, their offspring are more likely to have yellow fur and to be obese.

Courtesy Randy L. Jirtle, Ph.D. How does studying the fur colour of mice help us understand the differences between identical human twins? Young twins often have a great deal in common, but as they get older, they are more likely to eat different foods and have different experiences. These environmental influences can change the way their genes are turned on or off, just as the BPA affected the Agouti gene in the mice. These changes accumulate over time, so identical twins become less similar as they age. In this chapter, we will explore how nature and nurture interact to build the mind across the life span, and how the interactions between nature and nurture have been shaping the human mind over millennia.

3-1 Why Do We Say That Nature and Nurture Are Intertwined? Along with an understanding of the structures and processes of the brain, covered in Chapter 4, knowing how our biological history shapes our behaviour is an important part of understanding the mind. Contemporary psychologists view the contributions of nature (our heredity or innate predispositions) and nurture (the results of our experience with the environment) as being closely intertwined, as opposed to somehow competing for control over structure and behaviour. These identical twins probably do not look as similar as they did as small children. The twin on the left is a nonsmoker, while her sister on the right smoked for 29 years.

AP Images/Rex Features/American Society of Plastic Surgeons Scholars have not always thought about nature and nurture the way we do today. Instead of viewing the actions of nature and nurture as inseparable, earlier scholars talked in terms of nature versus nurture and debated the relative contributions of nature or nurture to a particular type of behaviour. Credit for describing the contrast between heredity and environment as "nature versus nurture" usually goes to Francis Galton (1869), who was Charles Darwin's cousin. Galton believed that intelligence was largely the result of inheritance, a topic tackled in Chapter 10. Over the next 150 years or so, many thinkers engaged in a highly spirited debate on this question. As Chapter 10 will demonstrate, contemporary psychologists view intelligence as another example of an outcome shaped by both genetic inheritance and environment. Francis Galton was the first to use the phrase "nature versus nurture."

Newscom/akg-images We can say with some certainty that the either/or approach to human behaviour has produced some of the most contentious discussions in the history of psychology. Our argument in favour of the intertwined approach to nature and nurture is intended not to sidestep difficult questions, but rather to support good science. By zooming out to integrate a number of perspectives, both biological and experiential, we can achieve a more accurate understanding of these questions.

3-2 What Are the Building Blocks of Behaviour? Before we explore the interactions between nature and nurture that contribute to psychological phenomena, let’s look at the genetic mechanisms that help shape the mind. Every nucleus in the approximately 37 trillion cells of your body, with the exception of your red blood cells and sperm or eggs, contains two complete copies of the human genome, a set of instructions for building a human. Your personal set of instructions is known as a genotype, which interacts with the environment to produce observable characteristics, known as a phenotype. Some genes have a large number of alleles. Of the more than 500 alleles for the BRCA1 (Breast Cancer 1) gene, a small number are associated with a higher risk for breast and other cancers. American actor Angelina Jolie, who possesses the high-risk alleles, chose to undergo a preventive mastectomy to reduce her chances of developing breast cancer.

Featureflash Photo Agency/ Shutterstock.com One half of your genotype was provided by your biological mother’s egg, and the other half was provided by your biological father’s sperm. Each parent contributes a set of 23 chromosomes, which in turn are composed of many molecules of DNA. A smaller segment of DNA located in a particular place on a chromosome is known as a gene. Each gene contains instructions for making a particular type of protein. Gene expression occurs when these genetic instructions are used to produce a particular protein. Each cell contains the instructions for an entire human organism, but only a subset of instructions is expressed at any given time and location. Gene expression in a nerve cell is different from gene expression in a muscle cell or a skin cell. Twenty-three pairs of chromosomes make up the human genome.

ISM/Phototake Different versions of a gene, or alleles, can give rise to different phenotypical traits. Many alleles can occur for a given gene, but an individual receives only two—one from each parent. For example, alleles for blood type include A, B, and O, but typically, nobody has all three. As shown in Figure 3.1, combinations of your two alleles make your blood Type A (AA or AO), Type B (BB or BO), Type AB (AB), or Type O (OO). Figure 3.1 Genotypes and Phenotypes of Blood Type. The three possible blood type alleles—A, B, and O—can be combined to produce Type A, Type B, Type O, or Type AB blood.

© Cengage Learning If both parents contribute the same type of allele, such as a version of the MC1R gene related to having freckles, the child would be considered homozygous for that gene (homos means “same” in Greek). If the parents contribute different alleles, such as one for freckles from one parent and one related to not having freckles from the other, the child is heterozygous for that gene (hetero means “different” in Greek). Recessive alleles determine a phenotype only when an individual is homozygous for a particular gene, whereas dominant alleles determine a phenotype in either the homozygous or the heterozygous condition. Because alleles for no freckles are recessive and alleles for freckles are dominant, the typical way that an individual has no freckles is if that person receives two copies of the no-freckle allele, one from each parent. Any individual receiving either two freckle alleles or one freckle allele and one no-freckle allele will have freckles (see Figure 3.2). Figure 3.2 Effects of Dominant and Recessive Genes. Having freckles (F) is dominant, whereas having no freckles (f) is recessive. The only way that a child can have no freckles is to inherit two of the recessive “no freckle” alleles, one from each parent. In this example, both parents are heterozygous (Ff) with freckles.
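The freckles cross in Figure 3.2 amounts to a Punnett square, which a few lines of code can enumerate. The sketch assumes only the dominance rule given above: any genotype containing at least one F allele produces freckles.

```python
# A minimal Punnett-square sketch for the Ff x Ff cross in Figure 3.2.
from collections import Counter
from itertools import product

parent1, parent2 = "Ff", "Ff"        # both parents heterozygous

outcomes = Counter()
for a, b in product(parent1, parent2):
    genotype = "".join(sorted(a + b))                    # FF, Ff, or ff
    phenotype = "freckles" if "F" in genotype else "no freckles"
    outcomes[(genotype, phenotype)] += 1

for (genotype, phenotype), n in outcomes.items():
    print(f"{genotype}: {phenotype} ({n}/4)")  # 3 freckles : 1 no freckles
```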

Strauss/Curtis/Corbis/Getty Images; © Cengage Learning Whether you have freckles or not is a simple example of how dominant and recessive genes interact, but one allele does not always dominate another. In a study of gene–environment interactions involving the serotonin transporter gene (SERT) and a child’s response to bullying, the authors note that with two types of alleles (S for short or L for long), individuals could have one of three genotypes: SS, SL, or LL (Sugden et al., 2010). Neither the S nor the L allele dominates the other. As you can see in Figure 3.3, the SL group had levels of emotional disturbance in response to frequent bullying that fell between the extremes of the SS and the LL groups (Sugden et al., 2010). If either the S or the L allele were dominant, the SL group would behave like that dominant group instead. Figure 3.3 Some Alleles Do Not Show Dominance. Neither the S nor the L allele of the serotonin transporter gene dominates the other. Among children who have been bullied frequently, those with the SL genotype (shown in green) experience a level of emotional problems midway between the levels of those with the SS and LL genotypes. If either the S or the L allele were dominant, we would expect the SL group to behave the same way as the homozygous dominant group.

© Cengage Learning Source: Adapted from K. Sugden et al. (2010). “Serotonin Transporter Gene Moderates the Development of Emotional Problems Among Children Following Bullying Victimization,” Journal of the American Academy of Child and Adolescent Psychiatry, 49(8), 830–840. doi:10.1016/j.jaac.2010.01.024. Psychology Takes on Real-World Problems Using DNA to Find Missing Persons Each year in Canada, approximately 500 missing persons are still missing one year after their disappearance. Such cases can cause anguish for friends and family members, who may never receive closure regarding their missing loved one. A few years after Judy Peterson’s daughter Lindsey went missing from Vancouver Island in 1993, she requested that her daughter’s DNA be entered into a national database in case Lindsey’s remains had been, or would be, found. Although she was able to use her daughter’s DNA to confirm that Lindsey wasn’t a match to any of the unidentified bodies with the British Columbia Coroners Service, the lack of a national database meant that there was no way for her to continue this check throughout the other provinces and territories. If Lindsey’s remains turned up in neighbouring Alberta, for example, no one would ever know. This changed in March 2018, when a national database for missing persons was finally created. The National Missing Persons DNA Program provides every police force, coroner’s office, and medical examiner’s office across Canada with the ability to exhaust all investigative avenues in missing persons cases. While national DNA databases already existed for criminal investigations (comprising DNA profiles collected from crime scenes and from offenders convicted of designated offences), the addition of new DNA indices for missing persons, relatives of missing persons, and human remains, as well as victims of crime and voluntary donors, makes the database far more powerful (Royal Canadian Mounted Police, 2018). Although the creation of the database is unlikely to bring good news to grieving family members, the hope is that it will at least allow parents like Judy Peterson to receive some closure. Judy Peterson spent eighteen years fighting for the creation of a national DNA database that would allow for the comparison of DNA from missing persons, such as her daughter Lindsey, with the DNA samples from unidentified bodies in coroner’s and medical examiner’s offices around the country. In March 2018, the National Missing Persons DNA Program finally became a reality.

THE CANADIAN PRESS/Adrian Wyld

3-2a Genetic Variation If you have siblings, you are aware that having the same biological parents does not guarantee similar appearance, personality, and behaviour. The development of an egg or sperm cell is like shuffling a deck of cards. In both cases, a large number of possible outcomes may occur. When a parent’s cell divides to make an egg or a sperm cell, each resulting cell contains 23 chromosomes, one chromosome from each of the parent’s original 23 chromosome pairs. As a result, a single human can produce eggs or sperm with 2^23 (8 388 608) possible combinations of chromosomes. Add this variability to the different possibilities provided by the other parent, and it may seem surprising that we resemble our relatives as much as we do. Given that each parent can pass along more than 8 million combinations of chromosomes, it might be surprising that family resemblance can be so strong, as it is in actors and brothers Liam (left) and Chris (right) Hemsworth.
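The card-shuffling analogy is easy to make concrete. In this small Python sketch (an illustration only, with crossing over deliberately ignored), a gamete is formed by picking one chromosome from each of the 23 pairs, which is where the figure of 8 388 608 comes from.

  # Form one gamete: choose one chromosome from each of the 23 pairs.
  # Crossing over, which adds even more variability, is ignored here.
  import random

  gamete = [random.choice(("maternal", "paternal")) for _ in range(23)]
  print(gamete)
  print("possible combinations:", 2 ** 23)  # 8388608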

MARCOCCHI GIULIO/SIPA/Newscom/Sipa Press/BEVERLY HILLS CA USA

3-2b Relatedness Despite this potential variability, we remain similar to our genetic relatives. This important point will be revisited later in the chapter, when the evolution of social behaviour is discussed. Relatedness is defined as the probability that two people share copies of the same allele from a common ancestor. If we go back in history far enough, we all share common ancestors. Relatedness, however, is usually computed within a limited number of generations. The chance that you share an allele with one of your parents is one-half, as is the chance that you share an allele with a sibling. First cousins have a one-eighth likelihood of sharing an allele (see Figure 3.4). These types of calculations reportedly led geneticist J. B. S. Haldane to proclaim, “I would lay down my life for two brothers or eight cousins!” (Bynum & Porter, 2005, p. 261). Haldane was computing the likelihood that his genes would be passed down to future generations. As discussed later in the chapter, evolutionary psychologists suggest that sacrificing yourself for others is more likely when the others are close genetic relatives. Figure 3.4 Relatedness. Relatedness refers to the probability that two people share a particular allele from a common ancestor. The chance that you share an allele with one of your parents or a brother or sister is 0.50, or one-half. The chance that you share an allele with a niece or nephew is 0.25, or one-quarter.
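The values in Figure 3.4 follow from a simple rule: each parent–child link in the path connecting two relatives halves the probability of sharing an allele, and independent paths (for example, through two shared grandparents) are added together. Here is a brief Python sketch of this standard path-counting rule (our illustration, not a method given in the chapter):

  # Relatedness = sum over independent paths of (1/2) ** links_in_path.
  def relatedness(links_per_path):
      return sum(0.5 ** links for links in links_per_path)

  print(relatedness([1]))      # parent and child: one 1-link path -> 0.5
  print(relatedness([2, 2]))   # full siblings: a 2-link path through each parent -> 0.5
  print(relatedness([3, 3]))   # aunt/uncle and niece/nephew -> 0.25
  print(relatedness([4, 4]))   # first cousins: 4-link paths through two grandparents -> 0.125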

© Cengage Learning

3-2c Sex Chromosomes Of the 23 pairs of human chromosomes, 22 pairs are perfectly matched. In other words, a gene appearing on one of a pair of chromosomes (perhaps a gene for blood type) has a corresponding gene on its partner. In contrast, the X and Y chromosomes do not carry the same genes. The much larger X chromosome contains about 2000 active genes, while the Y has fewer than 100. Most females carry two copies of the X chromosome, whereas most males carry one X and one Y chromosome. However, individuals may also be born with a single sex chromosome (a single X) or three sex chromosomes (various combinations of X and Y). It is also possible for individuals to have two sex chromosomes but develop as the opposite sex. For example, in extremely rare cases, genotypically XX individuals will develop into phenotypical males (known as XX males). This is typically caused by the sex-determining region Y (SRY) gene, located on the tip of the Y chromosome, becoming translocated onto an X chromosome, leading an XX embryo to develop as a male. Intersex is an umbrella term used to describe individuals who are born with sex characteristics (including genitals, gonads, and chromosome patterns) that do not fit typical binary notions of male or female bodies (UN Office of the High Commissioner for Human Rights, 2015). The allele responsible for hemophilia, a disease characterized by the failure of blood to clot, is found only on the X chromosome. This allele is recessive, leading to different outcomes based on the sex of the child receiving the alleles. If a female receives a healthy allele on the X chromosome from one parent and a disease-causing allele on the X chromosome from her other parent, she will be a carrier for the condition but not experience it. In contrast, a male receiving a disease-causing allele on the X chromosome from his mother will have the condition. Because there is no equivalent allele on the Y chromosome to offset the disease-causing recessive allele, the disease-causing allele will be expressed. As a result, conditions such as hemophilia are more frequent among males and are called sex-linked characteristics (see Figure 3.5). As illustrated in Figure 3.6, the family of Queen Victoria of Great Britain (1819–1901) spread hemophilia to a number of European royal houses. Figure 3.5 Hemophilia Is a Sex-Linked Trait. If a daughter inherits her mother’s X chromosome containing the allele for hemophilia, she will be a carrier but will not have the disease. If a son inherits this chromosome, he will have the disease. Unlike his sister, he does not have a healthy X chromosome to offset the disease allele.
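The sex-linked pattern in Figure 3.5 can be enumerated the same way as the earlier crosses. This Python sketch (once more, just an illustration of the logic) crosses a carrier mother with an unaffected father; daughters are either unaffected or carriers, while sons face a 50 percent chance of having hemophilia.

  # Cross a carrier mother (one healthy X, one X carrying the hemophilia
  # allele, written "Xh") with an unaffected father (X Y), per Figure 3.5.
  from itertools import product

  for egg, sperm in product(["X", "Xh"], ["X", "Y"]):
      child = (egg, sperm)
      if "Y" in child:
          status = "son with hemophilia" if "Xh" in child else "unaffected son"
      else:
          status = "carrier daughter" if "Xh" in child else "unaffected daughter"
      print(child, "->", status)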

Biophoto Associates/Science Source; © Cengage Learning Figure 3.6 Hemophilia and European Royalty. Queen Victoria of Great Britain (1819–1901) had nine children. One son (Prince Leopold, Duke of Albany) had hemophilia, and two daughters (Princess Alice and Princess Beatrice) were carriers for the condition. As a result of their marriages to other European royalty, the three children spread the hemophilia gene to the royal families of Germany, Russia, and Spain. Therefore, hemophilia was once popularly called “the royal disease.”


Photos, left to right: Pictorial Press Ltd/Alamy Stock Photo; Mary Evans Picture Library/CHARLOTTE ZEEPVAT; Mary Evans/The Image Works; Mary Evans Picture Library/CHARLOTTE ZEEPVAT; Background photo: martan/ Shutterstock.com; © Cengage Learning Even when genes are duplicated on the X and Y chromosomes, they can perform very differently depending on their location. Forensic experts determine the sex of the source of a genetic sample by observing the amelogenin gene, which contributes to the development of tooth enamel (Akane, 1998; Masuyama, Shojo, Nakanishi, Inokuchi, & Adachi, 2017). The size of the amelogenin gene on the X chromosome differs from that on the Y chromosome. Differences between genes for immune system function located on the sex chromosomes might explain the higher risks associated with organ transplants in which the sex of the donor and recipient does not match (Ge, Huang, Yuan, Zhou, & Gong, 2012).

3-3 Which Fields of Genetics Are Relevant to Psychology? Our species shares quite a few genes with chimpanzees, mice, fruit flies, yeast, and a weed known as thale cress (see Figure 3.7). At the same time, humans have genes that definitely set them apart from other animals (and plants). For example, research points to differences between humans and chimpanzees in a single gene, FoxP2, which appears to have had a significant effect on distinctly human behaviours, including spoken language (Konopka & Roberts, 2016). Figure 3.7 Genes Shared with Other Species. Humans share quite a few genes with other species, such as the 18 percent of genes that we share with a weed known as thale cress. However, geneticists are most interested in the genes that differ from those of other species, such as the FoxP2 gene, which appears to be responsible for spoken language. Mutations of this gene cause severe speech and language disorders.

WILDLIFE GmbH/Alamy Stock Photo; © Cengage Learning Table 3.1 compares several subfields of genetics that are relevant to our understanding of behaviour and mental processes. Behavioural geneticists attempt to discover the strength of genetic influences on a particular behaviour. Molecular geneticists look for candidate genes, or genes that have a greater impact on a trait of interest than other genes. Functional geneticists study the entire genome, looking for whole patterns of genetic differences linked to a given trait. Finally, geneticists studying gene–environment interactions look for situations in which candidate genes appear to have different effects.

Table 3.1 Subfields of Genetics
Branch of Genetics | Topic | Example
Behavioural genetics | Amount of heritability | Variations in loneliness across the population appear to be strongly influenced by genetics.
Molecular genetics | Candidate genes | Certain genes seem to impact loneliness more than others.
Functional genomics | Links between the global genome and particular traits | Genes might be expressed differently in lonely and nonlonely individuals.
Gene–environment interactions | Candidate genes have different effects in different situations |

3-3a Behavioural Genetics Behavioural genetics investigates the strength of genetic influences on a particular behaviour. Heritability is the statistical likelihood that variations observed across individuals in a population are due to genetics. If genes play no part in producing phenotypical differences among individuals, heritability is zero. For example, genes are responsible for human hearts, but there is no individual variation in the population in terms of the presence of a heart—each of us has one. Consequently, the heritability of having a heart is zero. If genes are totally responsible for all phenotypical differences among individuals, heritability is 1.0. For example, all variation in the population in terms of having or not having a fatal neurological condition known as Huntington’s disease is entirely due to genetics. If you inherit a Huntington’s gene from one parent, you will develop the condition. Heritability of most human traits is typically in the range of 0.30 to 0.60. Heritability is a concept that is frequently misunderstood. It always refers to populations, never to individuals. Saying that a trait such as shyness is 40 percent heritable does not say that 40 percent of one individual’s shyness is produced by genes and the other 60 percent by their environment. Instead, it says that the variations in shyness that we see across the population (some people are shy and others are not) are the result of both genetic and environmental factors (see Figure 3.8). Figure 3.8 Heritability Across Various Functional Domains. Heritability rates tell us how much of the variability seen in a population can be attributed to genetics. According to a meta-analysis of all twin studies conducted over a period of 50 years, heritability estimates cluster strongly within functional domains. Based on these estimates, we can say that genes have a greater influence on ophthalmological conditions (e.g., diseases of the eye) than on social values (e.g., attitudes).

Source: Adapted from P. McGuffin, B. Riley, & R. Plomin (2001). “Toward Behavioral Genomics,” Science, 291(5507), 1232–1249. doi:10.1126/science.1057264; permission conveyed through Copyright Clearance Center, Inc. Heritability cannot be assessed without taking the environment into account, which is another source of potential confusion. If the environment is held constant (i.e., everybody is treated the same way), the heritability of a trait will appear to be high. For example, if you plant seeds in trays with identical nutrients, water, and sunlight, the height of the resulting plants will reflect their genetic differences. In variable environments, heritability is lower. If you plant seeds in trays receiving different amounts of nutrients, water, and sunlight, the height of the resulting plants will appear to be less influenced by genetics. If you studied the heritability of human intelligence only in participants who all lived in extremely wealthy circumstances, or who all lived in extremely poor circumstances, genetic influences would be exaggerated, just as they are when you hold the nutrients, water, and sunlight constant for your plants. Researchers assessing the heritability of human traits attempt to do so within a typical range of environments. Diverse Voices in Psychology Genetic Research in Indigenous Communities Tests of genetic ancestry have become increasingly commonplace, with many private companies (e.g., 23andMe, Ancestry.com) now offering DNA testing services at relatively affordable rates. However, you may be surprised to learn that the genetic databases these companies rely on to conduct their analyses do not include Indigenous peoples from the United States or Canada. When U.S. senator and presidential candidate Elizabeth Warren had her DNA tested to examine her claim of Native American ancestry, the genetic testing service she used had to rely on Indigenous samples from Mexico and South America, an issue that was largely ignored by the media, although researchers in this area were vocal in their criticisms. This so-called “genomic divide” between people of European versus Indigenous ancestry means that genomic information, and the potential benefits of it, is largely unavailable for Indigenous peoples. Genomic data opens the door to precision medicine, a practice in which genetic testing allows medical treatments to be tailored to individuals based on their genetics. However, many Indigenous groups have been reluctant to participate in genetics research. Focus groups with Indigenous Canadians regarding genetics research have revealed widespread concern with privacy, discrimination, and an overall lack of trust (Morgan et al., 2019). These concerns are not surprising when you consider the unethical manner in which some past genetics research has been conducted. For example, in the 1980s, researchers at the University of British Columbia collected DNA samples from the Nuu-chah-nulth First Nations on Vancouver Island in order to examine a genetic basis for rheumatoid arthritis. After the study was completed, the researchers allowed the DNA samples to be moved to other research centres and used in unrelated research, all without the permission of the Nuu-chah-nulth. For many Indigenous peoples, practices such as the storage of biological samples in repositories are inconsistent with their cultural beliefs. 
Fortunately, researchers in Canada and around the world are working on rebuilding trust within Indigenous communities and developing policies that will enable Indigenous peoples to participate in potentially life-saving research, while also protecting their privacy and respecting their cultural beliefs (e.g., Taniguchi, Taualii, & Maddock, 2012). For example, the Maori people of New Zealand view tissue samples, DNA, and any associated data as taonga (“precious”) and tapu (“sacred”). To engage the Maori in genetics research, researchers developed a culturally sensitive plan that tailored informed consent, control over the procedures and data, and community engagement to existing Maori values (Beaton et al., 2017). Although the model was developed for use with Maori participants, it provides guidance for researchers setting up biorepositories with many types of communities. Using culturally sensitive procedures for handling biosamples collected for genetics research built trust between researchers and Maori communities in New Zealand.

davidf/E+/Getty Images Because of the influence of environment on heritability, some researchers question the use of adoption studies for assessing the relative influences of genetics and environment on development. These studies compare adopted children to their biological and adoptive parents in an effort to assess the relative impact of heritability. Like plants with constant amounts of nutrients, water, and sunlight, adoptive families share many common features as a result of the screening process required before adopting. Consequently, adoptive parents rarely represent as much diversity as the group of biological parents whose children they adopt. If all adoptive families provide a consistent environment, this factor would inflate the apparent heritability of characteristics examined in their adopted children. The environments provided by adoptive parents are more similar to one another than are the environments provided by biological parents. This environmental similarity can exaggerate genetic influences—a result that must be taken into account when investigating heritability using adopted children.
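The seed-tray logic can be demonstrated with a toy simulation. In this Python sketch (the numbers are invented and purely illustrative), a trait is the sum of a genetic component and an environmental component; shrinking the environmental variation, as uniform adoptive homes or identical seed trays do, inflates the apparent heritability.

  # Heritability here is the share of trait variance that is genetic:
  # Var(G) / (Var(G) + Var(E)). Uniform environments shrink Var(E).
  import random

  def variance(values):
      mean = sum(values) / len(values)
      return sum((v - mean) ** 2 for v in values) / len(values)

  def apparent_heritability(env_sd, n=100_000):
      genes = [random.gauss(0, 1) for _ in range(n)]
      envs = [random.gauss(0, env_sd) for _ in range(n)]
      trait = [g + e for g, e in zip(genes, envs)]
      return variance(genes) / variance(trait)

  print(apparent_heritability(env_sd=0.3))  # uniform environments: about 0.92
  print(apparent_heritability(env_sd=1.5))  # diverse environments: about 0.31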

© iStockphoto.com/digitalskillet Comparisons of twins are often used in behavioural genetics to evaluate relative contributions of genetics and environment. The differences between identical twins, who share identical DNA, and fraternal twins, whose DNA shows the same approximately 50 percent overlap found in any two siblings, are particularly useful because both types of twins share similar environments. The Minnesota Study of Twins Reared Apart (Bouchard, Lykken, McGue, Segal, & Tellegen, 1990) not only compares identical with fraternal twins but also includes pairs of identical twins and fraternal twins adopted at birth and raised in separate homes (see Figure 3.9). One example of such a pair is identical twins Lily MacLeod and Gillian Shaw, who were born in China, separated by circumstance, and adopted by two separate Ontarian families in 2000 when they were 8 months old. Although the adoption agency itself denied that the girls were twins, the adoptive parents immediately recognized the similarity between the two girls and DNA testing confirmed their suspicions. Although they grew up living in towns six hours apart from one another, the MacLeod and Shaw families made a pact from the very beginning that they would raise the girls as sisters and make sure that they were able to spend as much time together as possible. Figure 3.9 Twins Raised Together or Apart Show Comparable Levels of Similarity Across Several Traits. Comparisons of identical twins raised together or in separate adoptive families show that some traits are more similar in twin pairs (fingerprint ridge count) than others (nonreligious social attitudes). However, regardless of the degree of similarity for a trait, living apart or together seems to have had relatively little impact on the degree of similarity within twin pairs. In other words, twins raised apart are just about as similar on each trait as twins raised together.

Source: Adapted from T. Bouchard Jr., D. T. Lykken, M. McGue, N. L. Segal, & A. Tellegen (1990). “Sources of Human Psychological Differences: The Minnesota Study of Twins Reared Apart,” Science, 250, 223–228. Twin studies are also useful in establishing concordance rates, which are statistical probabilities that a trait observed in one person will be seen in another. Concordance rates are especially useful to psychologists interested in psychological disorders because they provide estimates of the heritability of a condition. For example, concordance rates for autism spectrum disorder (see Chapter 14) are high in identical twins (95.2 percent) relative to fraternal twins (4.3 percent; Nordenbæk, Jørgensen, Kyvik, & Bilenberg, 2014). Because both types of twins share a uterine environment and are exposed to similar parenting, this discrepancy makes a strong argument for the importance of genetic influences on autism spectrum disorder. Born in China and adopted by two separate families living in Ontario, identical twins Gillian Shaw and Lily MacLeod have technically been raised apart. However, their parents undertook great effort to ensure that the girls would be able to spend plenty of time together as they grew up.
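One classic way twin data are turned into a heritability estimate is Falconer’s formula, a standard behavioural-genetics tool that is not described in the chapter itself: because identical twins share roughly twice the allelic overlap of fraternal twins, doubling the gap between the two twin-pair correlations approximates heritability. A quick Python sketch with assumed, illustrative correlations:

  # Falconer's approximation: h2 = 2 * (r_identical - r_fraternal).
  def falconer_h2(r_identical, r_fraternal):
      return 2 * (r_identical - r_fraternal)

  # Hypothetical twin correlations for some trait (not study results):
  print(falconer_h2(0.70, 0.45))  # 0.50 -> moderate heritability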


Photo by Carlos Osorio/Toronto Star via Getty Images

3-3b The Search for Candidate Genes Although we can say that all humans share 100 percent of their genes, we do not share 100 percent of our alleles, giving each of us a unique version of the genome. In other words, we all share genes that produce eye colour of some sort, but our different combinations of alleles result in a variety of shades. There are at least 3 million DNA variations in the human genome—enough for us to differ from one another in nearly every gene (Plomin & Spinath, 2004). Analyzing variations of DNA in individuals who do or do not have a particular trait of interest, such as an illness or psychological disorder, can help molecular geneticists pinpoint the causes of problems and suggest preventive strategies. Our species shares genes for eye colour, but combinations of alleles produce many possible shades.

© Richard A. Sturm, IMB, University of Queensland A common misunderstanding is the belief that we can identify “a gene for” a particular behaviour. For example, a recent headline in Time magazine trumpets, “The Genes for Pot Addiction Have Been Identified” (Szalavitz, 2016). It is important to remember that genes encode for proteins, not behaviours. Genes build proteins that construct brains, and brains might or might not become addicted to cannabis. Rather than viewing a gene as causing a complex behaviour, it is more accurate to view genes as contributing to the development and functioning of the nervous system, which in turn generates observable behaviour. Prior to the development of databases made possible by the Human Genome Project and the International HapMap Project, investigating more than a few genes at a time was not feasible. Instead, candidate gene research studies were conducted, in which one gene or a small number of genes were compared between groups of people with and without a condition of interest. This search for candidate genes for a particular phenotype, such as schizophrenia, did not result in accurate or complete findings (Farrell et al., 2015). Rather than testing single genes, contemporary functional geneticists often use genome-wide association studies (GWAS) or whole-genome sequencing (WGS). Emerging technologies now allow researchers to scan complete sets of DNA from many participants, looking for variations associated with a particular phenotype, condition, or disease. When 25 historical candidate genes for schizophrenia were re-evaluated using GWAS, effects for 24 of the 25 genes were not confirmed, and 4 genes that had been missed by the candidate gene approach appear to be quite important (Farrell et al., 2015). As our methods for conducting genetic research improve, our answers become more complete and accurate. Although we have cautioned you against “a gene for” reasoning for complex human behaviours, the examination of candidate genes continues to be a viable approach to understanding the “nature” part of the nature–nurture interaction (see Figure 3.10). In some cases, identification of candidate genes can have profound influences on public policy. One such candidate gene is the MAOA (monoamine oxidase A) gene, which has been implicated in antisocial behaviour. The protein produced by the MAOA gene is an enzyme that affects several important neurochemicals, including dopamine and serotonin, which are discussed further in Chapter 4. Variations in MAOA are classified as low or high activity. The low-activity version has been linked to impulsive antisocial behaviour, leading to its popular reputation as “the warrior gene.” But how valid is this point of view? An early case study of a Dutch family characterized by an unusual zero-activity version of MAOA and a history of extreme aggression encouraged further investigations of the “warrior gene” (Brunner, Nelen, Breakefield, Ropers, & van Oost, 1993). Studies of more typical MAOA alleles, however, demonstrate very small effect sizes. This means that only a small part of the variation in antisocial behaviour across the population can be linked to people’s MAOA alleles. Many other factors must play important roles in determining levels of aggressiveness. Figure 3.10 Gene-Environment Interactions and the “Warrior Gene.” Low- and high-activity versions of the MAOA gene interact with the experience of child maltreatment to predict antisocial behaviour (Caspi et al., 2002). 
Here, we see that youth with either allele who are not exposed to child maltreatment have a low risk of being convicted of a violent crime. Youth with histories of probable or known severe maltreatment have a higher risk of being convicted of a violent crime if they also possess the low-activity version rather than the high-activity version of the MAOA gene.

© Cengage Learning This has not, however, stopped legal systems in the United States and Europe (though not Canada, as of yet) from considering MAOA status in criminal cases (McSwiggan, Elger, & Appelbaum, 2017). This information has been used to support the idea that defendants had reduced responsibility for aggressive actions because they “couldn’t help” being violent (see the Thinking Scientifically box for a similar discussion). It is also possible that courts of law might administer more stringent punishments, such as longer prison terms, under the belief that the person is relatively untreatable (González-Tapia & Obsuth, 2015). Geneticists and psychologists have a responsibility to communicate the correct interpretation of findings to policymakers in order to avoid improper applications of research results. Thinking Scientifically The Genetics of Sexual Aggression In 2015, Canada’s top general, Tom Lawson, commented during a nationally televised interview that men were “biologically wired in a certain way” (The National, June 16, 2015), which he suggested explained the high number of sexual harassment cases in the military. While this comment led to widespread outrage and he later apologized for his poor choice of words, much confusion continues to swirl around the idea of sexual aggression and biology. When we see headlines such as “Sexual Offending May Be Genetic,” it is important to remember to think scientifically about the meaning of such claims. Canadian researcher Kelly Babchishin has published numerous articles examining topics such as incest, child pornography, and other sexual offences. In one study, Babchishin and her colleagues examined how much of the variability in the commission of sexual crimes could be explained by genetic versus environmental factors. Using data from Swedish men who either had or had not been convicted of a sexual offence, the researchers concluded that genetic factors accounted for approximately 40 percent of the variation in the liability to commit a sexual offence (Langstrom, Babchishin, Fazel, Lichtenstein, & Frisell, 2015). Not surprisingly, studies like this one tend to attract a lot of media attention, and people who see the headlines can sometimes mistake such studies as providing evidence for the idea that sexual aggression is hardwired into our genes. But as we have seen throughout this chapter, nothing could be further from the truth. The same study also found that nonshared environmental factors between brothers (e.g., perinatal environments, unique social experiences) accounted for approximately 58 percent of the variation in the liability to commit a sexual offence. While studies like this one are helpful in the sense that they indicate that genetics may play a role in sexual aggression, we always need to keep in mind that our behaviour is never fully explained or determined by our genes alone, but by the complex interplay of myriad factors. When we read that “sexual offending is genetic,” we should never take that to mean that sexual offending is automatic, inevitable, or predetermined.
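The interaction pattern in Figure 3.10, like the Sugden et al. bullying result earlier in the chapter, has a simple computational signature: the effect of the environment differs by genotype. The risk values in this Python sketch are invented for illustration only; only the shape of the pattern matters.

  # Hypothetical risks of a violent-crime conviction (illustrative only).
  risk = {
      ("low-activity MAOA", "no maltreatment"): 0.05,
      ("low-activity MAOA", "severe maltreatment"): 0.35,
      ("high-activity MAOA", "no maltreatment"): 0.05,
      ("high-activity MAOA", "severe maltreatment"): 0.15,
  }
  for genotype in ("low-activity MAOA", "high-activity MAOA"):
      effect = risk[(genotype, "severe maltreatment")] - risk[(genotype, "no maltreatment")]
      print(genotype, "-> maltreatment raises risk by", round(effect, 2))
  # Different environmental effects (0.30 versus 0.10) for different
  # genotypes: the signature of a gene-environment interaction.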

3-3c Epigenetics Having identical genotypes, as is the case with identical twins, does not guarantee identical phenotypes, or observed characteristics. As we explained in the example of the Agouti gene’s effects on the fur colour and weight of baby mice, different phenotypes can result from the same genotype due to interactions between the organism and its environment. When factors other than the genotype produce changes in a phenotype, we say that an epigenetic change has occurred. Epi is Greek for “over” or “above,” so epigenetics refers to the reversible development of traits by factors that determine how genes perform. The field of epigenetics explores these gene–environment interactions. Epigenetic change influences gene expression, the process by which DNA builds proteins that contribute to features of living cells. Genes can be turned on or off by internal signals (hormones or neurochemicals) or by signals from external sources (diet or toxins). There is an obvious need for epigenetics in development, as the differences between a skin cell and a muscle cell result from turning on the right set of genes and turning off others. Thus, the magnitude of epigenetic change depends on an organism’s age. The fetus experiences the highest rate of epigenetic change, followed by the child and finally, the adult. While epigenetic changes are reversible, many last entire lifetimes. For example, individuals who experienced traumatic life events during childhood were found to have long-term epigenetic changes in the hippocampus, a structure associated with memory and responses to stress (Abdolmaleky, Zhou, & Thiagalingam, 2015). Among the factors known to produce epigenetic change are nutrition, disease-causing organisms, drugs, stress, and environmental toxins. In particular, malnutrition and stress experienced by pregnant women have the potential to influence the epigenetics of the fetus, leading to lifelong effects on physical and psychological well-being. In the discussion of psychological disorders in Chapter 14, you will see that many disorders trace their roots to a combination of genetic vulnerability and disruptions experienced by the pregnant woman, such as illness or malnourishment. Fraga et al. (2005) studied 160 identical twin pairs between the ages of 3 and 74. The two chromosomes on the left belong to a pair of 3-year-old twins, and those on the right belong to a pair of 50-year-old twins. Areas of red indicate differences between the two chromosomes in each pair related to differences in gene expression. As twins age, their gene expression becomes more different, as indicated by the greater amount of red in the chromosomes from the older twins. Twins who had spent the most time apart showed the greatest epigenetic differences.

“Epigenetic differences arise during the lifetime of monozygotic twins,” by Mario Fraga et al., in Proceedings of the National Academy of Sciences 2005 Jul 102 (30) 10407–8, Fig. 3. Copyright (2005) National Academy of Sciences, U.S.A. Geneticists have identified four processes that produce lasting but reversible changes in gene expression: ribonucleic acid (RNA) interference, RNA editing, histone modification, and DNA methylation (see Figure 3.11). For the purposes of this overview, we will focus on histone modification and DNA methylation. Histones are protein structures around which your DNA is wound. If the DNA in a single cell were not wound up in this fashion, it would be nearly 2 metres long (Annunziato, 2008). When either the core or the tail of a histone interacts with regulatory proteins, the expression of nearby segments of DNA can become more or less likely. DNA methylation occurs when a methyl group (one carbon atom bonded to three hydrogen atoms) is added to the DNA molecule. This has the result of turning genes off. You can think about DNA methylation as being similar to stapling some pages in a book together. Because of the staples, you can’t read the pages. Figure 3.11 Mechanisms of Epigenetic Change. Two processes for producing epigenetic change are histone modification and DNA methylation. Histone modification occurs when certain chemicals interact with the tail or core of a histone. DNA methylation occurs when a methyl group (one carbon atom bonded to three hydrogen atoms) attaches to the DNA molecule. These modifications affect the likelihood that particular genes will be expressed or silenced.

© Cengage Learning Although many epigenetic studies examine physical features like fur colour, more complex features of interest to psychologists are also subject to epigenetic influences. For example, rats that were licked frequently during infancy by their mothers (the rat equivalent of getting a hug from mom) were calmer when faced with stress later in life than rats licked infrequently (Champagne, Francis, Mar, & Meaney, 2003). By licking their pups, these mothers influenced the expression of genes that determined responses to a stress hormone (Bedrosian, Quayle, Novaresi, & Gage, 2018). The nurture provided by the mother had a lifelong impact on the offspring’s ability to cope with stress. Children exposed to child abuse have been found to have similar long-lasting changes in the expression of genes related to stress hormones (Neigh, Gillespie, & Nemeroff, 2009), as well as a number of genes associated with later medical problems and psychological disorders (Yang et al., 2013). Happily, these changes appear to be reversible in children who experience consistent and responsive caregiving by foster parents (Fisher, Van Ryzin, & Gunnar, 2011). A human being has about 100 trillion metres of DNA. To put this in perspective, this means that our DNA could go to the Sun and back more than 300 times. Epigenetics has the potential to illuminate the causes of many psychological disorders, as discussed in Chapter 14. Hundreds of separate genes appear to be linked to disorders like schizophrenia, autism spectrum disorder, and bipolar disorder, yet no single gene produces more than a tiny effect on a person’s risk of developing these disorders. In contrast, patterns of DNA methylation and unusual histone modifications are strongly associated with risk for these conditions. For example, epigenetic differences help to distinguish between identical twin pairs in which one twin has schizophrenia or bipolar disorder while the other remains healthy (Abdolmaleky et al., 2015). Connecting to Research The Lasting Effects of Early Experiences: The Role of Epigenetics An increasing amount of research is being conducted to examine the role that epigenetic processes play in determining how early life experiences leave their mark on adulthood (e.g., McGowan et al., 2009). Some research even indicates that these changes can be passed down to future generations (e.g., Dias & Ressler, 2014; Yehuda et al., 2014). Here, we take a closer look at a landmark study conducted by Michael Meaney’s research group at McGill University, which examined the epigenetic mechanisms underlying the connection between the maternal behaviour of rats and the stress response of their offspring. Previous work had established that rat mothers engage in distinct types of maternal behaviour: Some rat mothers spend a lot of time licking and grooming (LG) their pups during the first week of life (high-LG mothers) and others spend very little time licking and grooming their pups (low-LG mothers). The offspring of these mothers subsequently demonstrate reliably different stress responses: The offspring of high-LG mothers demonstrate reduced behavioural and physiological fear responses when placed in stressful situations, compared with offspring of low-LG mothers. By placing the genetic offspring of high-LG mothers in the care of low-LG mothers (and vice versa), the researchers had previously established that the differential fear responses of the rat pups raised by each type of mother could not be explained by genetics. 
Rats that were the biological offspring of low-LG mothers but were raised by a high-LG mother demonstrated a reduced stress response similar to the normal offspring of high-LG mothers, and grew up to become high-LG mothers themselves (Francis, Diorio, Liu, & Meaney, 1999). The big unanswered question was “How?” The Question: What are the mechanisms by which these maternal effects are sustained over the life span of the animal? Methods Before birth (embryonic day 20), researchers examined the entire promoter region of the glucocorticoid receptor (GR) gene and found that for all pups, the region was unmethylated. After birth, some of the rat pups remained with their biological (high-LG or low-LG) mothers, whereas other rat pups were cross-fostered, meaning that the biological offspring of high-LG mothers were transferred to the care of low-LG mothers and vice versa. Methylation status was examined through a procedure called sodium bisulphite mapping one day after birth and again at the end of the first week of life (the time when rat mothers are known to engage in these different licking and grooming behaviours). In order to examine whether differences in methylation persist into adulthood, the rats were again examined at 90 days of age. The rats’ physiological response to stress (i.e., the release of stress hormones, as we will discuss in Chapter 16) was examined by taking blood samples from the rats immediately before, during, and after a restraint stress procedure, during which the rats are removed from their cages and placed in small plexiglass restrainers (which prevent the rat from being able to turn around) for a 20-minute period. The hippocampi of the adult rats were also dissected and examined for concentration of GR. Ethics As you learned in Chapter 2, all animal research in psychology is regulated by a set of strict ethical guidelines. The procedures undertaken in this study were performed according to the guidelines developed by the Canadian Council on Animal Care and the protocol approved by the McGill University Animal Care Committee. Results Methylation of the critical region of the GR gene in the hippocampus appeared to begin one day after birth, and at this point was the same across all of the rat pups. After the critical first week of life being raised with either a high-LG or low-LG mother, dramatic differences in methylation status occurred, with the critical region of the GR gene effectively demethylated for the high-LG pups, but not for the low-LG pups. This difference remained into adulthood (when the rats were 90 days old). Rats who had been raised by high-LG mothers developed into adult rats who responded more calmly to stressful situations, due to greater expression of the glucocorticoid receptor gene (see Figure 3.12). Figure 3.12 Pathways of Epigenetic Change. According to Weaver et al. (2004), the behaviour of the high-LG mothers triggers permanent changes in the behaviour and physiology of rat pups via two pathways. In the first pathway, the high-LG behaviour increases serotonin activity in the hippocampus, which then leads to increased expression of the transcription factor NGF1-A, which binds to the promoter of the glucocorticoid receptor (GR) gene to increase its transcription and expression. In the second, the first exon (coding region of a gene that contains information to encode a protein) of the GR gene in the hippocampus is demethylated. Demethylation increases the likelihood that a gene will be expressed. 
The histones surrounding the GR gene also become more acetylated, decreasing their binding to DNA, providing another mechanism by which NGF1-A would have greater access to the GR gene. These epigenetic processes result in permanent behavioural and physiological changes in the pups.

Source: From Sapolsky, R. (2004). Mothering style and methylation, Nature Neuroscience, 7, 791–792; permission conveyed via Copyright Clearance Center, Inc. Conclusions Described as a “tour de force” study upon its publication (Sapolsky, 2004), this study demonstrated that an epigenomic state of a gene can be established by maternal behaviour during early life through DNA methylation and other processes. Follow-up research by Meaney’s research group has revealed that these DNA methylation differences are not restricted to a single gene, but are in fact widespread in the hippocampus (McGowan et al., 2011). Other studies have moved beyond rats to investigate how early childhood experiences may lead to similar epigenetic effects in humans (e.g., McGowan et al., 2009). Psychology as a Hub Science Understanding the Epigenetic Influences of Nutrition We typically think of nutritionists as providing good advice about what to eat, especially for pregnant women, children buying school lunches, and people facing health challenges. But we also understand that the food we eat is one of the key environmental factors that produce epigenetic change. A new field known as nutrigenomics has emerged to explore the epigenetic influences of diet. In the not-too-distant future, you might be given an individualized diet plan by a nutritionist based on your personal methylation pattern. Although this might initially help us with certain cancers and other disease states, we know that some psychological disorders are also influenced by epigenetics. It is possible that in addition to more conventional treatments, individuals with psychological disorders might benefit from diets tailored to their epigenetic histories.

Adapted from “Mapping the Backbone of Science,” by K. W. Boyack et al., 2005, Scientometrics, 64(3), 351–374. With kind permission from Springer Science+Business Media. Many common foods have known epigenetic effects. Intake of garlic, broccoli, and dietary fibre can turn on anti-cancer and other protective genes. We saw earlier in this chapter how mother mice exposed to BPA gave birth to offspring that had yellow fur and were obese due to reduced methylation of the Agouti gene. However, if the mother exposed to BPA was also given a diet rich in the nutrients choline, betaine, and vitamin B12, all of which contribute to increased methylation, the offspring had normal weight and brown fur. We might not be able to control the environmental toxins to which we are exposed, but a greater understanding of epigenetics might help balance our exposure with appropriate nutrition.

gephoto/ Shutterstock.com Summary 3.1 Major Concepts in Genetics
Concept | Definition | Example
Genotype (image: ISM/Phototake) | An individual’s genetic makeup. | A person might have an allele for freckles and an allele for no freckles.
Phenotype (image: © Richard A. Sturm, IMB, University of Queensland) | An individual’s observable characteristics. |

3-4 How Does Evolution Occur? The human genome is the product of millions of years of evolution, defined by biologists as descent with modification from a common ancestor. The study of evolution allows us to trace the family tree of living things. In his book The Origin of Species, Charles Darwin proposed that species evolve or change from one form to the next in an orderly manner (Darwin, 1859). Darwin was well aware of the procedures used by farmers to develop animals and plants with desirable traits by mating particular individuals. A farmer’s goal to raise the strongest oxen for pulling a plow might be accomplished by breeding the strongest available oxen to each other. In these cases, the farmer is using artificial selection to determine which individuals have the opportunity to produce offspring. Darwin suggested that instead of a farmer making these decisions, the pressures of survival and reproduction in the wild would make the choice, a process that he called natural selection. Organisms that survive long enough to reproduce would pass their traits to the next generation. Organisms that did not reproduce would not have the opportunity to pass their traits to future generations. As geneticists often remind us, we have no infertile ancestors. Charles Darwin’s theory of evolution described how species change in an orderly manner.

DEA/C. BEVILACQUA/Getty Images In the more than 150 years since The Origin of Species was first published, our understanding of genetics and the fossil record has expanded exponentially, lending substantial further support for Darwin’s views. Surprisingly, Darwin was able to derive his theory without the benefit of a basic understanding of genetics. He was unable to account for the variations he observed in a particular trait. That understanding was provided by Gregor Mendel (1822–1884), who discovered ways to predict the inheritance of particular traits, like the colour of flowers, in his research on pea plants (Mendel, 1866). Mendel, in turn, was working without our modern understanding of genes. Combining current understanding of genetics with the natural selection processes proposed by Darwin provides scientists with powerful hypotheses about the progression of species over time.

3-4a Mechanisms of Evolution In addition to the process of natural selection described by Darwin, evolution can result from mutation, migration, and genetic drift. A mutation is an error that occurs when DNA is replicated. The average human baby is born with about 130 new mutations, but most have no effect (Zimmer, 2009). Mutant alleles providing some advantage spread through the population, but most mutant alleles that result in a disadvantage disappear from future generations. Migration occurs when organisms move from one geographical location to the next. Moving to a new location can affect the survival of individuals and the frequency of certain alleles in the population. Phenotypical traits that are advantageous in one environment might be less so in another. Genetic drift produces change from one generation to the next through chance or accident. Type B blood is virtually absent in contemporary populations of Indigenous North American peoples, most likely due to chance (Halverson & Bolnick, 2008). The group of ancestors who first made their way to the Western hemisphere did not include anyone with the Type B allele. If the ancestors’ blood type alleles had been more representative of the entire human population, more of their descendants would have Type B blood. Darwin understood that breeders could influence the traits of offspring by mating particular individuals. He believed that natural selection operated according to the same principles. The pressures of survival and reproduction in the wild would take the role of the breeder—determining which traits are passed to the next generation.
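Genetic drift is easy to watch in a toy simulation. The following Python sketch uses the standard Wright-Fisher idea (our illustration, not a model from the chapter): each generation's allele count is a chance draw based on the previous generation's frequency, and in a small population an allele can be lost, as the Type B allele apparently was, or fixed by accident alone.

  # Wright-Fisher-style drift: no selection, just sampling chance.
  import random

  def drift(freq=0.5, pop_size=50, generations=200):
      n_alleles = 2 * pop_size          # diploid population
      for _ in range(generations):
          count = sum(random.random() < freq for _ in range(n_alleles))
          freq = count / n_alleles
          if freq in (0.0, 1.0):        # allele lost or fixed by chance
              break
      return freq

  print([drift() for _ in range(5)])    # many runs end at 0.0 or 1.0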

TIMOTHY CLARY/AFP/Getty Images/Newscom We can explore the effects of evolutionary processes—mutation, migration, genetic drift, and natural selection—on the history of one allele: the recessive allele for blond hair. The original appearance of the allele for blond hair was probably the result of a random mutation occurring in northern Europe some 10 000 years ago (Frost, 2006). Migration, or rather the lack of it, might account for the relatively restricted area in northern Europe populated by blonds until fairly recently. Genetic drift undoubtedly reduced the global frequency of the blond allele between 1300 and 1700, as waves of bubonic plague decimated the European population. If by chance every person carrying the blond allele had died from the plague before reproducing, the allele would have disappeared from the human genome (see Figure 3.13). Figure 3.13 Spread of the Blond Allele. After the first appearance of the blond hair allele about 10 000 years ago as a result of a chance mutation, its frequency might have been affected by migration, genetic drift, and natural selection.

© Cengage Learning Has natural selection influenced the blond gene? Some scientists believe it has. When people have a choice of mates of equal value, they will select the one that stands out from the crowd (Frost, 2006; Bem, 2001). Therefore, individuals with relatively rare blond hair might have enjoyed more reproductive success than those with more common, darker hair colours (Field et al., 2016). In Germany, with its high percentage of blonds, the trait of blondness is viewed differently than in countries where blonds are relatively rare. Any consideration of evolution must include the question of what natural selection selects. Natural selection favours the organism with the highest degree of fitness, defined as the ability of one genotype to reproduce relative to other genotypes. The concept of fitness includes survival to adulthood, ability to find a mate, and reproduction. Fitness is not some static characteristic, such as being strongest or fastest. Instead, fitness describes the interaction between characteristics and the environment in which they exist. A genotype that succeeds during the Ice Age may be at a significant disadvantage during periods of warmer temperatures. Animals in cold climates have short legs and stocky bodies, which help retain heat, whereas animals in warm climates have long legs and slim bodies, which release heat. Again, we see the need to consider nature within the context of nurture. Gregor Mendel (1822–1884) made important discoveries about inheritance at about the same time that Darwin was working on his theory of evolution, but neither scientist knew about genes and chromosomes. Modern geneticists combine this knowledge to form powerful hypotheses about the nature of living things.

/AKG Images

3-4b Adaptation Adaptation refers to either the process or the result of change because of natural selection. In other words, a species can respond to an environmental change by adapting, and features of the new phenotype may be called adaptations. Adaptations can take many forms. They can be behaviours, such as jumping higher to better avoid a predator, or anatomical features, such as eyes that can see colour. Adaptations do not necessarily produce perfection. Any adaptation that is good enough to contribute to the fitness of an organism will carry forward into future generations. Kim Kardashian West has dyed her hair blond numerous times over recent years, explaining that it is her husband, Kanye West’s, favourite hair colour. In many parts of the world, particularly regions where blond hair is rare, it seems that Kanye is not alone in his preference.

Photo by Pascal Le Segretain/Getty Images A classic example of rapid adaptation is the case of the English peppered moth (Biston betularia). Before the Industrial Revolution, most peppered moths found in Britain were light grey, which allowed them to hide against the similar colours of tree bark. Darker moths occasionally appeared beginning around 1848, but because they were less capable of hiding from predators, they made up only about 1 percent of the population. With increasing industrialization, tree bark became coated in soot. The once-camouflaged light grey moths became an easy target for predators against the darker background of the sooty trees. The darker moths rapidly became the norm, reaching frequencies of about 98 percent. As pollution came under better control, tree bark returned to its original light grey, and the lighter moths again became the norm. The peppered moth population successfully adapted to changing environmental circumstances, with colour playing the role of an adaptation. The moths did not “decide” to change colour. Natural selection, in the form of greater rates of reproductive success on the part of moths with a particular colour, changed the frequencies of colour alleles within the population. Adaptations often appear to be compromises between costs and benefits. Adult human males have about ten times as much testosterone as adult human females. Testosterone conveys a reproductive advantage because men with higher testosterone report having more sex partners and an earlier age at intercourse (Lassek & Gaulin, 2009). Higher endogenous testosterone levels are also associated with increased sperm production. On the negative side, however, high testosterone levels lead to lower immune system functioning, making the high-testosterone males more vulnerable to disease (Muehlenbein & Bribiescas, 2005). Adaptation is only one source of evolutionary change. Random events, such as the collision of meteors with the Earth and resulting climate changes, are believed to have destroyed some types of life and provided broad opportunities for others. We are also limited in our ability to use adaptation to predict the future. The study of evolution is similar to the study of history. Although we gain insight into wars by studying the causes of World War II, we cannot use our knowledge to predict future wars with precision. We can be fairly certain that antibiotic-resistant bacteria, climate change, pollution, and reproductive technologies are probably changing the face of the human population as this text goes to press, but where all these changes will lead us remains unknown. Fitness varies across environments. Characteristics like long ears and long legs work in hot, desert climates, but short ears and legs conserve heat in colder climates.
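The peppered moth story can be replayed with a toy model of natural selection. The survival advantages in this Python sketch are assumptions chosen for illustration, not measured values; the point is that even a modest, consistent reproductive edge moves allele frequencies quickly.

  # One-locus selection sketch: each generation, the dark allele's share
  # grows in proportion to its relative reproductive success.
  def select(freq_dark, w_dark, w_light, generations=50):
      for _ in range(generations):
          mean_w = freq_dark * w_dark + (1 - freq_dark) * w_light
          freq_dark = freq_dark * w_dark / mean_w
      return freq_dark

  print(round(select(0.01, w_dark=1.2, w_light=1.0), 3))  # sooty trees: dark spreads (about 0.99)
  print(round(select(0.98, w_dark=1.0, w_light=1.2), 3))  # clean trees: dark declines (about 0.005)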


Corbis; Jeff Vanuga/Spirit/Corbis; Peter Burian/Spirit/Corbis The dark and light colouring of the peppered moth population in Great Britain changed in response to pollution from soot that collected on trees and changed again when pollution controls reduced the soot.

Perennou Nuridsany/Science Source; Michael Willmer Forbes Tweedie/Science Source

3-4c Evolution of the Human Brain Our interest as psychologists is in the mind, and the mind and the behaviour that it produces originate in the structures and processes of the brain and nervous system. Nervous systems are fairly recent innovations that separate animals from plants (see Figure 3.14). We can place the origin of the Earth at 4.5 billion years ago and the first single-celled life forms 1 billion years later, but the first neural nets appeared only 700 million years ago. In these primitive animals, the nerves in the abdomen were as likely to be important to behaviour as the ones in the head. True brains residing in heads did not appear until animals formed skeletons, around 500 million years ago. The first decidedly human brain made its appearance only 7 million years ago, a small blip in the timeline of evolution (Calvin, 2004). The current model of the human brain has only been available for the last 100 000–200 000 years. Figure 3.14 Timeline of Brain Evolution. If the 4.5-billion-year history of the Earth were expressed as a 24-hour day (the scaling arithmetic is checked in the brief sketch following this list):

  • The first single-celled organisms would have emerged 18 hours ago.
  • The first nervous systems would have emerged about 3 hours and 45 minutes ago.
  • The first true brain would have emerged about 2 hours and 40 minutes ago.
  • The first hominin brain would have emerged less than 2.5 minutes ago.
  • The current version of the human brain would have emerged less than 3 seconds ago.
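Every clock figure in Figure 3.14 comes from a single proportion: years ago divided by 4.5 billion, multiplied by 24 hours. A minimal sketch of that conversion, using the approximate event dates given in the text:

    # Map "years before present" onto a 24-hour clock in which the Earth's
    # entire 4.5-billion-year history is compressed into a single day.
    EARTH_AGE_YEARS = 4.5e9

    def clock_time_ago(years_ago):
        seconds_ago = years_ago / EARTH_AGE_YEARS * 24 * 3600
        hours, rest = divmod(seconds_ago, 3600)
        minutes, seconds = divmod(rest, 60)
        return f"{int(hours)} h {int(minutes)} min {seconds:.1f} s ago"

    events = {
        "first single-celled organisms": 3.5e9,  # 1 billion years after Earth formed
        "first nervous systems": 700e6,
        "first true brains": 500e6,
        "first hominin brain": 7e6,
        "current human brain": 150e3,            # midpoint of 100 000-200 000 years
    }

    for event, years in events.items():
        print(f"{event}: {clock_time_ago(years)}")

Running this reproduces the bullet values: the first nervous systems land at about 3 h 44 min ago, the first true brains at 2 h 40 min, and the current human brain at under 3 seconds.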


Photos, left to right: Sebastian Kaulitzki/ Shutterstock.com; pan demin/ Shutterstock.com; blickwinkel/Alamy Stock Photo; imageBROKER/Alamy Stock Photo; The Natural History Museum/Alamy Stock Photo; © Cengage Learning

Anthropologists use the term hominin to describe species that walked on two feet, had large brains, and are assumed to be related to modern humans. Over the 7 million years of hominin evolution, brains grew rapidly, suggesting that improved intelligence was quickly translated into substantial advantages in survival. Early tool-using hominins, the australopithecines, had brains that were about the same size as those of modern chimpanzees. Homo erectus, a hominin living about 1.5 million years ago, had a brain roughly twice that size, and the brains of modern humans, or Homo sapiens, are roughly three times the size of an australopithecine brain. As will be discussed in Chapter 4, the most distinctive area of the human brain is the cerebral cortex, the outermost layer of the brain. Compared with other primates, the human brain has a much larger cerebral cortex. Notably, the expansion of the cortex has not been uniform. Certain regions of the cortex (known as “hotspots”) are believed to have expanded to a disproportionate extent during evolution. Research indicates that these high-expanding regions of the cortex act as communication hubs and play a central role in supramodal cognition, which is the ability to integrate information from across the brain in a flexible, task-dependent manner (Sneve et al., 2018).

It has long been known that taking exogenous testosterone reduces the production of endogenous testosterone, which decreases sperm count in men. Given this, researchers have been examining the use of testosterone therapy as a potential male hormonal contraceptive for decades (Page & Amory, 2018). Promising results from recent clinical trials indicate that men may soon have the option of taking a daily oral male contraceptive (Thirumalai et al., 2019).

Djomas/ Shutterstock.com Hominins were not the only creatures who evolved large brains and considerable intelligence. The other primates, elephants, and whales are not lacking in these areas. Although the challenges of finding food, avoiding predators, and navigating through territories require considerable intelligence, these ecological challenges are no match for the complexity of social life faced by the hominins. The major factor distinguishing human intelligence from the intelligence of other species is the richness and complexity of the social behaviour supported by the human brain. Managing the abilities to distinguish friend from foe, imitate the behaviour of others, use language to communicate, recognize and anticipate the emotions, thoughts, and behaviour of others, maintain relationships, and cooperate required the evolution of a special brain (Cacioppo et al., 2002; Hrdy, 2005; Roth & Dicke, 2005). Comparisons of the challenges faced by different species support a stronger role for social complexity than for ecological complexity in building bigger brains (Cacioppo & Decety, 2011a; Dunbar & Schultz, 2007).

3-4d The Contemporary Human Brain You surf the Internet, complete your calculus homework, and read this textbook with a brain that is essentially the same size as that of your early Homo sapiens ancestors. Although we can understand the advantages to survival of big, intelligent brains, we do not know why advances such as agriculture, literacy, and urbanization have not been accompanied by additional increases in brain size. It is possible that we have reached equilibrium between our needs for intelligence and the costs of a big brain. Brains are expensive to run in terms of nutrients. Although the brain comprises only about 2 percent of the body’s weight, it requires at least 15 percent of the body’s resources. In addition, brain size may be limited by the dimensions of the birth canal. Further change may not occur unless we experience a drop in the costs of big brains, a change in nutrients, or additional pressures for greater intelligence. Over the course of hominin evolution, the size of the brain increased dramatically. It is possible that the ability of bigger brains to manage social relationships could have driven this rapid change.

E. R. Degginger/Science Source Although brain size has changed little during Homo sapiens’s time on the Earth, the evolution of the human brain has not ended, and average intelligence has not remained the same. Modern genetic techniques allow researchers to date changes in a particular gene. Genes involved with brain development appear to have changed as recently as 6000 years ago (Evans et al., 2004, 2005). As discussed in Chapter 10, IQ test scores have increased dramatically worldwide over the last 100 years (Flynn, 1999). If brain size has not changed for 100 000 years, let alone during the last 100 years, how can we account for the observed increase in intellectual performance? It is likely that environmental factors, including nutrition and education, might explain the improvement. It is somewhat surprising that humans figured out how to go to the Moon and back with brains that were the same size as those of Homo sapiens living 100 000 years ago.

Archive Image/Alamy Stock Photo

Summary 3.2 Principles of Evolution

  • Natural selection (photo: TIMOTHY CLARY/AFP/Getty Images/Newscom). Definition: A trait’s frequency in a population is determined by the survival and reproductive success of the organisms with the trait. Example: Faster rabbits are more likely to survive and reproduce than slow rabbits, leading to more fast rabbits in subsequent generations.
  • Mutation (photo: Pascal Le Segretain/Getty Images). Definition: Genetic changes that occur spontaneously or because of external factors.

3-5 How Does Evolution Influence Behaviour? We mentioned earlier in the chapter that behaviour can be adaptive. If an animal that is good at hiding in the bushes is more successful than others of its species in escaping predators, it is likely that the ability to hide will spread through subsequent generations of the population. Physical features, like the colouring of an animal that allows it to hide, are well-understood types of adaptations. Evolutionary psychologists attempt to explain how behaviours can be adaptive too.

Mitsuaki Iwago/Minden Pictures/Getty Images Behaviour like hiding, however, is unlike the other adaptations discussed so far, such as the colour of a moth or blond hair. Colour is a physical characteristic, and it is a fairly simple matter to identify the genes associated with these phenotypes. Behaviour as a phenotype is considerably more complex. It is not an anatomical structure like a wing or an eye. Can we assume that behaviour is shaped by the same evolutionary forces that affect physical traits? Darwin thought so. In The Descent of Man and Selection in Relation to Sex, he writes: The difference in mind between man and the higher animals, great as it is, certainly is one of degree and not of kind. We have seen that the senses and intuitions, the various emotions and faculties, such as love, memory, attention, curiosity, imitation, reason, etc., of which man boasts, may be found in an incipient, or even sometimes in a well-developed condition in lower animals. (Darwin, 1871, p. 126)

3-5a The Evolutionary Psychology Perspective Among the psychological specialties discussed in Chapter 1, evolutionary psychology, a subspecialty within biological psychology, is the most relevant to our current discussion of the evolution of behaviour. This approach to the mind assumes that our current behaviour exists in its present form because it provided some advantage in survival and reproduction to our ancestors (Cosmides & Tooby, 1997). The evolutionary psychology approach not only owes an obvious debt to Darwin, but also is a direct descendant of the functionalism supported by William James. As the term functionalism implies, behaviour is seen as promoting survival, as opposed to being random and pointless. The goal of evolutionary psychology is to explain how the patterns of behaviour that we share with other humans have been shaped by evolution.

3-5b Origins of Social Behaviour Reconstructing the evolution of the nervous system is difficult, and tracing the origins of individual behaviour is even more challenging, but identifying the roots of social behaviour might be the most difficult task of all. Occasionally, physical evidence has allowed scientists to determine whether dinosaur parents stayed around to look after their young, but such accounts leave much detail unexplored and unexplained. Individuals belonging to social species congregate in a number of ways, from pairs to families to whole societies. Belonging to a social group provides the benefits of mutual protection and assistance. Predatory fish are most likely to hunt on the perimeter of a school of fish because it’s easier to isolate prey there than in the middle of the school (Ioannou, Guttal, & Couzin, 2012). Being on the social perimeter is risky for our species as well. This simple fact of survival might be one of the reasons that we react so emotionally when we believe that we are being socially excluded. Being social carries costs as well as benefits. Social animals face injury in competition for food and mates and are exposed to contagious illnesses (Alexander, 1974), but the benefits of being social have clearly outweighed the costs for many species. Predator fish typically hunt on the perimeter of a fish ball, where it is easier to isolate and capture prey. Humans who are on the social perimeter also may be at risk from predation, but usually from their own kind.

MAURICIO HANDLER/National Geographic Creative In typical environments, individual animals experience a variety of possible interactions when they come into contact with others of their kind. In each case shown in Table 3.2, individuals either benefit or do not benefit from the interaction, ultimately affecting their survival and reproductive success. Both parties benefit equally if they cooperate. For example, two hunters can work together to bring down an animal that neither could successfully hunt alone. Sharing the resulting meat with the families of both hunters would contribute to their survival and reproductive success. Much social behaviour probably originated in these types of situations, in which the benefits of cooperation for an individual’s survival and reproduction outweighed the disadvantages of cooperating. Table 3.2 Outcomes of Social Interactions

                            The second organism wins   The second organism loses
The first organism wins     Cooperation                Selfishness
The first organism loses    Altruism                   Spite
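The logic of the table reduces to two yes/no questions: does each party benefit from the interaction? A minimal illustrative sketch of that classification:

    # Classify a social interaction (Table 3.2) by whether each party benefits.
    def interaction_type(first_benefits: bool, second_benefits: bool) -> str:
        if first_benefits and second_benefits:
            return "Cooperation"   # both win, e.g., hunters sharing a joint kill
        if first_benefits:
            return "Selfishness"   # the first wins at the second's expense
        if second_benefits:
            return "Altruism"      # the first sacrifices so the second wins
        return "Spite"             # both lose

    print(interaction_type(True, True))    # Cooperation
    print(interaction_type(False, False))  # Spite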

© Cengage Learning Although no cultures today exist exactly like the hunter–gatherers of the past, the Ache of Paraguay are often used as a model of how that life might have been. Here, the Ache cooperate with one another to fish. Social behaviours like cooperation might have allowed humans, who are not particularly strong individuals, to survive.

Kim Hill/Arizona State University Cooperation, however, is not the only way that two individuals can interact. In selfish interactions, one person could steal food from another, allowing the thief’s family to survive while the victim’s family starves to death. In spiteful interactions, both participants lose. In some divorce proceedings, the partners are so determined to keep each other from retaining resources that everything goes to the lawyers. Finally, in altruism, one individual’s self-sacrifice benefits another individual. Altruism, or the sacrifice of yourself for others, is more common among related individuals, but it also occurs when we are in close social contact with others. Honeybees sting to defend their hive, but in doing so, they end their own lives.

Gherasim Rares/ Shutterstock.com Altruism is widespread in the animal kingdom. Most of us have experienced a honeybee sting, which is suicidal behaviour on the part of the bee in an effort to protect its hive (Wilson, 1975). Altruism can extend to entire social organizations, regardless of the degree of relatedness (Chicago Social Brain Network, 2011). Among emperor penguins (Aptenodytes forsteri), survival of the chicks in the hostile Antarctic cold depends not only on an individual parent, but on the larger huddle formed by other parents. As discussed in greater detail in Chapter 13, altruism is one of the most challenging social behaviours to explain in evolutionary terms. Darwin himself was puzzled by the apparent sacrifice of some individuals that led to the survival of the group. If altruism results in the destruction of the individual with altruistic genes, why doesn’t this behaviour disappear? To explain this phenomenon, we return to the concept of relatedness presented earlier in the chapter. Sacrificing your life to save a close blood relative might increase the likelihood that your alleles would be passed along to subsequent generations. Self-sacrifice in this case does not need to be a conscious decision. Any behaviour that results in a greater frequency of the relevant genes in subsequent generations will become more common. Cooperation allows humans to carry out complex behaviours that would be impossible for a single individual to perform successfully.
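The intuition that sacrificing yourself for a close blood relative can still spread the relevant alleles is usually formalized as Hamilton's rule, which the text does not state explicitly: altruism can be favoured by selection when r × b > c, where r is the genetic relatedness between altruist and recipient, b is the reproductive benefit to the recipient, and c is the reproductive cost to the altruist. A minimal sketch with illustrative numbers:

    # Hamilton's rule: kin-directed altruism can spread when r * b > c.
    def altruism_favoured(r: float, b: float, c: float) -> bool:
        """r = relatedness, b = benefit to recipient, c = cost to altruist."""
        return r * b > c

    # Illustrative values only: same benefit and cost, different relatives.
    print(altruism_favoured(r=0.5, b=3.0, c=1.0))    # sibling (r = 0.5): True
    print(altruism_favoured(r=0.125, b=3.0, c=1.0))  # first cousin (r = 0.125): False

The rule captures why self-sacrifice need not be conscious: any combination of relatedness, benefit, and cost satisfying the inequality increases the frequency of the underlying alleles in subsequent generations.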

stefanolunardi/ Shutterstock.com You might be thinking that you often behave altruistically toward people who are not related to you. You might have comforted a friend who experienced a death in the family instead of studying for an upcoming midterm. This type of behaviour, known as reciprocal altruism, occurs when we help another individual who is likely to return the favour at some future date (Trivers & Burt, 1999).

3-5c Sexual Selection Sexual selection was Darwin’s term for the development of traits that help an individual compete for mates (Darwin, 1871). To what extent is human behaviour influenced by sexual selection? Survival of emperor penguin chicks in the hostile cold of the Antarctic depends not just on the individual parent, but also on the larger huddle formed by the other parents.

3/NaturePL/Superstock Parental Investment Sexual selection is influenced by the different investments in parenting made by males and females (Emlen & Oring, 1977). In many species, including our own, the female bears most of the costs of reproduction, from carrying the developing organism until birth to nurturing the young until adulthood. As a result, human females face sharper limitations than human males on the number of children that they can produce in a lifetime. If the goal is to pass your genes to subsequent generations and you are going to produce only one or two children, each child had better be as healthy and well nurtured as possible. The average number of children per woman worldwide dropped dramatically from 4.98 in 1960 to 2.45 in 2014 (The World Bank, 2016; see Figure 3.15). Figure 3.15 The Human Birth Rate Is Dropping Rapidly. The average number of children per woman worldwide dropped dramatically between 1960 and 2014.

© Cengage Learning Source: Adapted from the World Bank (http://www.worldbank.org/projects). Genghis Khan may have been the most prolific human male in history. His distinctive Y chromosome has been identified in 16 million living men, or about 0.5 percent of the world’s men (Zerjal et al., 2003).

Andrey Burmakin/ Shutterstock.com Because males have a lower investment of time and resources in reproduction compared to females, it might seem that the best reproductive strategy for males would be promiscuity, but this is not usually the case. In species such as our own, with lengthy and complex development leading to adulthood, a male who abandons his offspring puts their survival at risk (Gibson, 2008). Even if a man fathers many children, his genes are less likely to make it into the next generation if most or all perish from lack of care or protection. The mother can maximize her children’s chances of survival by choosing a father who will not only pass along healthy genes, but also participate in the raising of children. Women have the ability to make accurate predictions of a man’s interest in children simply by looking at a photograph of his face (Roney, Hanson, Durante, & Maestripieri, 2006). Men with facial features correlated with high testosterone (strong brow ridge, square chin) are viewed as less likely to participate in childrearing than are men with facial features correlated with lower testosterone. These results suggest that women would be able to determine a man’s potential as a father before reproductive investment occurs. Women show the ability to predict a man’s score on the Infant Interest Questionnaire, which might indicate how involved a father he would be, by detecting the influence of testosterone on his facial features. The face at the top indicates higher testosterone, while the one at the bottom indicates lower testosterone.

Detail from F. Moore et al., Figure 1, “Composite male faces constructed to differ in levels of T and C,” from the article “Evidence for the stress-linked immunocompetence handicap hypothesis in human male faces,” Proc. R. Soc. B, March 7, 2011, by permission of the Royal Society.

Traits Possibly Influenced by Sexual Selection Earlier in this chapter, we discussed how blond hair might have provided a reproductive advantage because its relative novelty made it an attractive feature. What other types of human traits appear to fit Darwin’s ideas about sexual selection? Sexual selection might occur in two ways. In intrasexual selection (intra means “within”), members of one sex compete with one another for access to the other sex. In some species, such as deer, males engage in fights that determine which males are allowed to mate and which are not. Features like large antlers, which assist in winning a fight, could become sexually selected. In intersexual selection (inter means “between”), characteristics of one sex that attract the other might become sexually selected. The male peacock’s luxurious tail appears to have developed for the sole purpose of attracting mates. Similar to the costs and benefits of adaptations described earlier in the chapter, the evolutionary diversification of secondary sexual traits in males may be strongly shaped by trade-offs with ejaculate production. For example, the quality of the courtship song produced by male crickets is negatively correlated with sperm quality (Simmons, Tinghitella, & Zuk, 2010). However, research on this topic is difficult to conduct and to date has led to inconsistent findings (Simmons, Lupold, & Fitzpatrick, 2017).

Evolutionary psychologists have argued that a number of human traits might have been subjected to sexual selection, including humour and vocabulary. According to this argument, human males use humour (Cherkas et al., 2000) and their vocabularies (Rosenberg & Tunney, 2008) to impress females with their intelligence because of intersexual selection. In romantic situations, males use more uncommon words than they do in other situations. We can also tell you about one behaviour that does not successfully attract females—taking unnecessary risks (Wilke, Hutchinson, Todd, & Kruger, 2006). However, risky activities might have indirect positive outcomes for males. Dominant, successful males are likely to attract more females, and dominance among males is often decided on the basis of intrasexual competition in risky endeavours.

Experiencing Psychology Measuring Individual Differences in Life History Speed, Mating Effort, and Parenting Effort Life History Theory (LHT) is a theory of biological evolution that examines the diversity of life history strategies used by various species around the globe. While typically used to examine differences between species, some evolutionary psychologists have examined whether LHT can be applied to better understand variations within human life history strategies (e.g., variations in parental investment and risk-taking). Although partially constrained by physiology, human life history strategies are also influenced by environmental (e.g., socioeconomic and cultural) factors (Kruger, 2017). Differential K, or K-factor, is the name given to the common underlying factor that is believed to index general life history speed (from fast to slow).
A low K-factor indicates a faster life history speed, represented by a range of strategies including a focus on short-term gains at the expense of long-term benefits, high mating effort, and low parenting effort. According to LHT, such strategies are more likely to be adopted by individuals living in unpredictable, adverse environments (e.g., Copping & Campbell, 2015). A high K-factor indicates a relatively slower life history speed, including strategies that focus on long-term outcomes, selective mating, and high parental effort—strategies believed to be more common in stable, secure environments.

The most commonly used measure of life history assessment in psychology is the Arizona Life History Battery-Short Form (ALHB-SF; Figueredo et al., 2006, 2014). While the scale reliably measures a range of attributes believed to reflect life history speed (e.g., risk-taking, social support), it does not measure mating or parental effort. In part because trade-offs between mating and parental effort are believed to be strongly predictive of variations in life history, Daniel Kruger at the University of Michigan recently developed brief self-report measures of mating effort and parental effort. Based on a large sample of undergraduate students, Kruger found that scores on the mating effort and parenting effort scales supported the notion that these are distinct, inversely related constructs, and that each correlated with K-factor in the predicted direction. Each measure was also predictive of a range of other psychological and behavioural measures (e.g., the number of short-term and long-term relationships the person had been in). While further research is needed to fully assess the reliability and validity of these new scales, these brief measures appear to add valuable information to the understanding of human life history variation.

Each of the scales, along with scoring instructions, is provided below, if you would like to assess where you fall along the continuum of life history strategies, mating effort, and parenting effort. However, please keep in mind that these scales are meant to assess variation among individuals, and are not meant to be meaningful stand-alone scores.

The Mini-K Short Form of the ALHB Please indicate how strongly you agree or disagree with the following statements using the scale below. For any item that does not apply to you, please enter “0.”

−3 Disagree Strongly | −2 Disagree Somewhat | −1 Disagree Slightly | 0 Don’t Know/Not Applicable | +1 Agree Slightly | +2 Agree Somewhat | +3 Agree Strongly

  1. I can often tell how things will turn out.
  2. I try to understand how I got into a situation to figure out how to handle it.
  3. I often find the bright side to a bad situation.
  4. I don’t give up until I solve my problems.
  5. I often make plans in advance.
  6. I avoid taking risks.
  7. While growing up, I had a close and warm relationship with my biological mother.
  8. While growing up, I had a close and warm relationship with my biological father.
  9. I have a close and warm relationship with my own children.
  10. I have a close and warm romantic relationship with my sexual partner.
  11. I would rather have one than several sexual relationships at a time.
  12. I have to be closely attached to someone before I am comfortable having sex with them.
  13. I am often in social contact with my blood relatives.
  14. I often get emotional support and practical help from my blood relatives.
  15. I often give emotional support and practical help to my blood relatives.
  16. I am often in social contact with my friends.
  17. I often get emotional support and practical help from my friends.
  18. I often give emotional support and practical help to my friends.
  19. I am closely connected to and involved in my community.
  20. I am closely connected to and involved in my religion.

To determine your Mini-K score, simply add up your responses. A score near 0 indicates neither a high nor low K-factor. A score closer to +60 indicates a higher K-factor (slower life history speed). A score closer to −60 indicates a lower K-factor (faster life history speed).

Source: Figueredo A. J., Vásquez G., Brumbach B. H., Schneider S. M. R., Sefcek J. A., Tal I. R. … Jacobs W. J. (2006). Consilience and life history theory: From genes to brain to reproductive strategy. Developmental Review, 26, 243–275; permission conveyed via Copyright Clearance Center, Inc.

Mating Effort
Please indicate how strongly you agree or disagree with each of the statements as a description of you and what you would do.
  1. Wear flashy, expensive clothes.
  2. Sleep with a large number of people in your lifetime.
  3. Knowingly hit on someone else’s partner.
  4. Attractive to others for a brief sexual relationship.

−3 Disagree Strongly | −2 Disagree Somewhat | −1 Disagree Slightly | 0 Don’t Know/Not Applicable | +1 Agree Slightly | +2 Agree Somewhat | +3 Agree Strongly

Parenting Effort

  1. Good at taking care of children.
  2. Use most of your money to support your family.
  3. Be a loyal and faithful wife/husband.
  4. Caring and emotionally supportive in a long-term relationship.

To determine your mating effort and parenting effort scores, simply add up your responses for each scale. Higher numbers indicate higher effort on that scale. Are your scores on each scale very different from one another? Research indicates that these two measures are inversely related to one another (meaning that high scores on one scale predict lower scores on the other scale).

Source: Republished with permission of SAGE Publications, Inc. Journals, from D. J. Kruger. (2017). Brief self-report scales assessing life history dimensions of mating and parenting effort. Evolutionary Psychology, 15(1), doi.org/10.1177/1474704916673840; permission conveyed through Copyright Clearance Center, Inc.
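Scoring all three scales is simple addition, but a short script makes the mechanics and the expected ranges explicit. This is an illustrative sketch only, not part of the published scales; the respondent's values below are hypothetical.

    # Score the Mini-K and the mating/parenting effort scales by summing
    # responses given on the -3 (Disagree Strongly) to +3 (Agree Strongly) scale.
    def score_scale(responses, n_items):
        if len(responses) != n_items:
            raise ValueError(f"expected {n_items} responses, got {len(responses)}")
        if any(not -3 <= r <= 3 for r in responses):
            raise ValueError("each response must be between -3 and +3")
        return sum(responses)

    # Hypothetical respondent. Ranges: Mini-K -60..+60, effort scales -12..+12.
    mini_k = score_scale([2, 1, 3, 2, 1, 0, 3, 2, 0, 2,
                          3, 2, 1, 1, 2, 3, 2, 2, 1, 0], n_items=20)
    mating = score_scale([-1, -2, -3, 0], n_items=4)
    parenting = score_scale([3, 2, 3, 3], n_items=4)

    print(f"Mini-K: {mini_k:+d} (toward +60 = slower life history speed)")
    print(f"Mating effort: {mating:+d}; parenting effort: {parenting:+d}")

Consistent with Kruger's finding that the two effort scales are inversely related, the hypothetical respondent scores high on parenting effort and low on mating effort.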

3-5d Culture Not only are human societies directly influenced by our biological history, but they also bear the stamp of the inventions of the biological brain in the form of culture. Cultures, which arise from socially transmitted knowledge, provide practices, values, and goals that can be shared by groups of people. Languages, morality, arts, laws, and customs make up a diverse and vibrant part of human social interactions. Experiences shaped by culture, like other types of experiences, interact with survival and reproductive pressures. How might cultural differences have affected our ancestors’ survival?

We can gain insight into our species’ cultural history by observing contemporary pre-agricultural societies, such as the Waorani and Yanomamö of the Amazon Basin. These groups are remarkably combative. Fights between villages account for 30 percent of the deaths among Yanomamö males (Chagnon, 1988) and 54 percent of deaths among Waorani males (Beckerman et al., 2009). Cultural traditions in the two groups have led to different patterns of interactions between aggression and reproductive success. Aggressive Yanomamö men produce more children than less aggressive Yanomamö, but less aggressive Waorani men have more surviving offspring than aggressive Waorani (Beckerman et al., 2009). A simple cultural distinction—the Yanomamö practice of standing down for a period of time between raids, which is not a practice shared by the Waorani—appears to account for the differences observed in the impact of aggression on reproductive success.

Our social minds were shaped by the cultures of hunter–gatherer groups until the development of agriculture approximately 10 000 years ago. With improved control of the food supply, less geographical mobility, and larger communities, humans entered a new era of social interaction. Although we believe that many features found in modern human behaviour, such as reciprocal altruism, originated in the hunter–gatherer society, further social adjustments were required as groups became larger and more complex. Agriculture, with its emphasis on land ownership, might have been the origin of patriarchal systems, in which men maintain control of resources, and inheritances follow the male line. Unlike hunter–gatherer societies, which are relatively egalitarian so far as the rights of men and women go, agricultural societies tilted the control of food and important resources in favour of men. Early industrialization merely built upon agricultural systems and, if possible, accentuated the power differential between males and females. Contemporary trends are again moving in a more egalitarian direction, with women in developed countries enjoying considerable financial independence and reproductive choice. These changes will no doubt have further effects on our social environment.

As societies became larger, humans took advantage of their large brains to devise new cultural systems to maintain group cohesion. Emerging societies shared many of the same types of internal conflict, so we typically find similar moral, religious, and legal systems across diverse cultures that attempt to control marriage, “character” issues such as honesty, and the transfer of precious resources. With all the flaws of our groups, whether we’re looking at families, communities, or nations, we retain a strong need to belong. As we will see on many occasions in this textbook, humans do not thrive in isolation.
Our dependence on kinship, friendship, and group membership, honed over the course of 100 000 years or more of social living, continues to influence our behaviour today. The Waorani (left) and Yanomamö (right) of the Amazon Basin share high rates of aggression and yet experience different reproductive outcomes. Reproductive success is higher among the most aggressive Yanomamö and the least aggressive Waorani. This outcome is probably the result of the Yanomamö practice, not shared by the Waorani, of standing down between raids. This gives aggressive Yanomamö men chances to rest, heal, and reproduce that are not available to aggressive Waorani men.

Francois ANCELLET/Getty Images; Wave Royalty Free/Design Pics Inc/Alamy Stock Photo Humans have developed a strong need to belong and often adopt traditions that enhance a sense of group membership.

Photo by Julia Arram/Icon Sportswire via Getty Images

Summary 3.3 Evolutionary Influences on Behaviour

  • Cooperation (photo: stefanolunardi/ Shutterstock.com). Definition: Working together to benefit all parties involved. Example: Hunters work together to kill an animal that an individual could not kill alone.
  • Altruism (photo: Gherasim Rares/ Shutterstock.com). Definition: Sacrificing your personal interests to benefit another individual. Example: Honeybees sting to defend their hive, ending their own lives.

3-6 Chapter Summary Throughout this chapter, we have explored numerous ways in which nature and nurture interact to influence the mind and behaviour. Importantly, we have seen that the answer to the question “is it nature or is it nurture?” is always “both!” While we each inherit genes from our biological parents that may predispose us to certain behaviours or characteristics, from the moment of conception onward, our genes begin to interact with the surrounding environment in intricately complex ways that we are just beginning to understand. As advancements in epigenetic technologies are made, we will continue to learn more about how our environmental circumstances, such as sleep (Chapter 5), diet (Chapter 7), and stress (Chapter 16), regulate gene expression. As newspaper headlines continue to exclaim that “scientists have identified the gene for X,” we know to always zoom out and consider the bigger picture. Whether “X” is aggression, empathy, or any other psychological trait or disorder, we know that behaviour is never explained by a single factor. In this chapter, we have also delved more deeply into the functionalist approach to psychology first proposed by William James (Chapter 1). We have explored how patterns of behaviour such as altruism, cooperation, and sexual selection have been shaped by evolution. Understanding that our human brain slowly adapted to respond to challenges faced by our early ancestors thousands of years ago can aid us in understanding why our brains work the way that they do, whether or not it leads to adaptive behaviour by current standards.

Key Terms The Language of Psychological Science Be sure that you can define these terms and use them correctly.

  • adaptation
  • alleles
  • altruism
  • behavioural genetics
  • candidate gene
  • concordance rates
  • dominant
  • epigenetic
  • evolution
  • fitness
  • gene
  • gene expression
  • genetic drift
  • genome-wide association studies (GWAS)
  • genotype
  • heritability
  • heterozygous
  • homozygous
  • migration
  • mutation
  • natural selection
  • nature
  • nurture
  • phenotype
  • recessive
  • reciprocal altruism
  • relatedness
  • sexual selection

Chapter 4 Introduction Perceived social isolation or connectedness can produce changes in the cells of our immune system.


Argosy Publishing, Inc. Learning Objectives

  1. Identify the relevance of brain structures and processes for understanding mind and behaviour.
  2. Differentiate the major branches of the nervous system, explaining the core biological function of each branch.
  3. Describe the process by which neurons communicate with one another, allowing the nervous system to integrate complex information.
  4. Differentiate the roles played by major neurotransmitters in supporting physical functioning and psychological experience.
  5. Associate key structures in, and regions of, the brain and peripheral nervous system with important aspects of physical and psychological functioning.
  6. Explain the process by which hormones influence psychological experience and behaviour.

Throughout history, human survival has been threatened by the various bacteria and viruses that try to make us their home. The bacteria-driven Black Death decimated Europe between 1346 and 1400, killing an estimated 30 to 60 percent of the population (Austin Alchon, 2003). Smallpox, measles, and influenza carried by Europeans to the Western Hemisphere killed as many as 90 percent of Indigenous populations (Public Broadcasting Service [PBS], 2005). The Spanish flu of 1918, which is related to contemporary bird flu strains, killed between 50 million and 100 million people worldwide in about a year (Patterson & Pyle, 1991). Although you might think surviving a pandemic is more a question of biology and medicine, behaviour and mental processes have considerable influence on our ability to fight bacteria and viruses (Cacioppo & Berntson, 2011). Again, zooming out to the human social environment and then zooming back in to a smaller scale gives us a complete and interesting picture. Humans, who lack impressive teeth or claws, formed groups to enhance the odds of their survival. Anyone who was socially excluded from these groups experienced a more hostile environment. Social exclusion not only separated a person from the help of others in life-threatening situations, perhaps in fending off a predator, but worse, could lead to outright conflict with others, including combat. Under such hostile circumstances, socially excluded people faced a greater risk from bacterial infections than from viruses. Bacteria enter the body through cuts and scratches, whereas viruses are transmitted through body fluids (e.g., from sneezing), so you are most likely to be exposed to viruses when you are in close contact with other people.

A. Inden/Cusp/Corbis With that background in mind, look at the group of people in the image on the previous page. Do you think the woman on the left is feeling included or excluded? Surprisingly, whether we typically feel socially isolated or socially connected can have serious implications for our health (Cole et al., 2015). If this woman normally feels isolated and often left to fend for herself, she will, like her excluded ancestors, face a greater threat from bacteria than from viruses. Her brain will respond to her feelings of isolation by generating hormonal signals that will tell her immune system (shown in the larger image at the beginning of the chapter) to gear up to protect her against bacteria. In contrast, if she usually feels socially connected to others, her brain will initiate a cascade of hormonal signals that tell her immune cells to prepare to protect her against viruses. This is just one example of how the mind’s perceptions of the social environment—whether it is friendly or not, for instance—can affect biological processes that are important to health and survival. In Chapter 3, we learned how the challenges of surviving and reproducing in particular physical and social environments could shape a species’ biology and behaviour. In turn, the resulting biological structures and processes of the mind exert profound influences on our physical and social environments. In this chapter, we will provide a foundation for understanding the biological bases of behaviour and mental processes by exploring the structures of the nervous system and the ways that they function. Human brains such as this one, carefully held by one of your textbook’s authors, weigh about 1.5 kilograms (about 3 pounds) and contain approximately 86 billion neurons. That’s about the same number as the stars in our galaxy, the Milky Way.

Courtesy of Dr. Skirmantas Janusonis/University of California, Santa Barbara. Photo © Roger Freberg

4-1 What Is Biological Psychology? Many of us find the concept that our minds are somehow a result of the activity of nerve cells a bit unsettling. How could our feelings, thoughts, and memories be caused by a bunch of cells? Shouldn’t there be more to who we are than something so physical? Such ideas led thinkers like Renaissance philosopher René Descartes to propose a philosophy of dualism, which suggests that our mind is somehow different and separate from our physical being. If you are more comfortable with thinking about mind this way, go ahead, as long as you recognize that the field of biological psychology, and the neurosciences in general, embrace the competing philosophy of monism. According to the monistic approach, the mind is what the brain does. Biological psychology, also known as behavioural neuroscience, is the scientific study of the reciprocal connections between the structure and activity of the nervous system and behaviour and mental processes.

Biological changes often influence behaviour and cognition. For example, when your stomach is empty, a gut hormone called ghrelin is released. When ghrelin reaches the brain, you respond by feeling hungry. After you eat, ghrelin release is suppressed and you feel satisfied, so biology (amount of ghrelin released) initiates behaviour and cognitions (feeling hungry and beginning to eat, or feeling full and stopping eating). However, your behaviour and cognitions can also have substantial effects on your biology. When participants were told that the 380-calorie milkshake that they consumed was a “sensible” 140-calorie shake, their ghrelin levels barely changed, whereas consuming the same 380-calorie milkshake was followed by a steep decrease in ghrelin when they were told it was a 620-calorie “indulgent” shake (Crum, Corbin, Brownell, & Salovey, 2011). In other words, the way people thought about the shake (cognitions) had remarkable effects on their biology (amount of ghrelin released). Other research has shown that participants who consumed a high-sugar protein shake subsequently ate more potato chips than participants who consumed a low-sugar protein shake, unless the participants were led to believe that the protein shake was unhealthy (Mandel & Brannon, 2017). We can’t guarantee that praising your healthy salad for its “indulgent” qualities will make it easier to stick to your diet, but experiments like these reinforce the power of thought to influence biology and behaviour.

Not only can our biology affect our behaviour, but our behaviour and cognitions can have significant effects on our biology as well. When Alia Crum and her colleagues made participants think they were drinking an indulgent milkshake instead of a sensible milkshake, their levels of the hormone ghrelin were more consistent with feelings of satisfaction.


Gibbs Graphics

4-1a Early Attempts to Understand Biological Psychology Advances in the methods we use to observe the structure and function of the nervous system have driven the history of biological psychology. The development of contemporary methods, such as the recording and imaging of brain activity, opened new areas of inquiry to biological psychologists. Before these methods were available, however, most of our knowledge of the nervous system resulted from clinical observations of injured or mentally ill individuals or from autopsy, the examination of bodies after death. When used with other contemporary methods, clinical observation and autopsy are quite accurate, but early thinkers lacking contemporary methods often struggled in their attempts to understand the physical basis of mind. They understood many things correctly while making some notable errors. Aristotle (384–322 bce) mistakenly believed that the heart, not the brain, was the source of mental activity. An interesting historical mistake was phrenology. Toward the end of the 18th century, phrenologists proposed that the pattern of bumps on an individual’s skull correlated with that person’s personality traits and abilities (Simpson, 2005). The brain supposedly worked like a muscle, getting larger through use, which led frequently used areas of the brain to grow so much that the skull above these areas would bulge. Phrenologists “read” a person’s character by locating the bumps on a person’s head and identifying the personality traits below each bump according to a map. None of these ideas were close to being accurate. Although the phrenologists were wrong about the significance of bumps on the skull and the effects of activity on the structure of the brain, they did reach one correct conclusion: Their notion that some behavioural functions are localized to certain areas of the brain is one we share today. Thinking Scientifically When Does Reductionism Work? When Does It Fail? Reductionism in Science is defined as the explanation of complex things as sums of simpler things. Taking a rather extreme reductionist approach, science fiction often features scenes in which an android reminds a human that they’re not so different after all—the brain is just a computer made up of chemicals, nothing more, nothing less. In some ways, all modern science is reductionist. Scientists assume that whether you are studying particle physics or human behaviour, a single set of fundamental laws explains much of what we observe. We do not need new sets of rules for the features of table salt (sodium chloride) in each context in which it appears. Regardless of whether the chemical is participating in neural signalling, flavouring our food, contributing to high blood pressure, or making us float more easily when we swim in the ocean, the fundamental principle is the same: Salt is salt. The scientific search for fundamental principles has been fruitful, but it does have limitations. Although we can learn a lot by breaking apart complex things to study simple things, we saw some risks to this approach in the debates between structuralists and Gestalt psychologists, described in Chapter 1. Fish swim in schools, geese fly in a V formation, ants and bees swarm, cattle form herds, and humans form societies. We could never understand these complex phenomena by studying the behaviour of an individual member of the group. Nobel laureate physicist P. W. Anderson (1972, p. 
393) reminded scientists that large collections of simple things do not always behave in the same way that simple things behave in isolation. He wrote that “at each stage (of complexity), entirely new laws, concepts, and generalizations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one. Psychology is not applied biology, nor is biology applied chemistry.” This chapter on the biological foundations of behaviour and mental processes relies extensively on reductionist thinking. As you work through the chapter, however, it is important to keep Anderson’s cautions in mind. Some aspects of behaviour will continue to be governed by rules that explain the actions of simple things, while others will require the introduction of rules better suited to more complex combinations and interactions of simple things. Viewing a complex concept as a sum of its simpler parts is not always the best way to understand its full meaning.


fabio fersa/ Shutterstock.com; nito/ Shutterstock.com

4-1b Contemporary Approaches in Biological Psychology More modern perspectives of the nervous system emerged from the work of scientists such as Nobel Prize–winning anatomist Santiago Ramón y Cajal (1852–1934) and neurologist John Hughlings Jackson (1835–1911). Ramón y Cajal’s work helped us understand the microscopic level of the nervous system, while Jackson’s conclusions illuminate the relationships among the larger structures of the brain. Surprisingly, it took scientists a long time to accept the idea that the nervous system was made up of separate cells just like other tissues in the body. Even in the late 19th century, scientists such as Camillo Golgi still argued that the nervous system was a single continuous network. Ironically, Ramón y Cajal used a microscopic stain invented by Golgi to prove him wrong. Using Golgi’s stain, Ramón y Cajal demonstrated conclusively that the nervous system was made up of separate cells, an idea that became known as the Neuron Doctrine. Both men shared the 1906 Nobel Prize in Physiology or Medicine. Phrenologists believed that “reading” the bumps on a person’s head, using a bust like this as a reference, could tell them about a person’s character.

Stephen Coburn/ Shutterstock.com Based on observations of his patients with seizure disorders, Jackson proposed that the nervous system is organized as a hierarchy, with progressively more complicated behaviours being managed by more recently evolved and complex structures (Jackson, 1884). We can see Jackson’s hierarchy at work when we observe people drinking alcohol. Alcohol specifically decreases the activity of parts of the brain involved with decision making. When a person has had too much to drink, the more complex social controls (e.g., knowing how close you should stand to a stranger) normally provided by higher level areas of the brain are diminished. Without the influence of these controls, people start doing things that they would not do while sober. This change in behaviour reflects the now-unrestrained influence of the more primitive parts of the brain involved with behaviours such as aggression and sexuality. You might, for example, pick a fight with someone when you normally think fighting is wrong. The aggression and sexuality were there all along, but the activity of the higher levels of the nervous system usually restricted their expression to more appropriate circumstances (Siever, 2008).

Over the last 100 years, our understanding of the correlations between brain and behaviour leaped forward with continuing improvements in research methods, including many of those found in Table 4.1. In particular, methods that allow scientists to observe the activity of the living brain opened the door to the investigation of research questions that were impossible to study previously. While these imaging technologies are still far away from “mind reading,” researchers have been able to use imaging data to identify the visual content of a participant’s dreams (Horikawa, Tamaki, Miyawaki, & Kamitani, 2013) and distinguish between images remembered from one’s own experience and similar but novel images (Rissman, Chow, Reggente, & Wagner, 2016).

Importantly, different types of research questions need to be addressed using different neuroscientific methods. For example, while some methods provide excellent information about the specific location of activity within the brain (referred to as spatial resolution), other methods do a better job of indicating the precise timing of activity in the brain (referred to as temporal resolution). Electroencephalography (EEG) involves placing electrodes on the scalp of a research participant and recording the electrical activity of the brain. While this approach does not enable researchers to pinpoint the exact location of brain activity (since it is reading information from the scalp), it does provide excellent information regarding the timing of brain activity. Functional magnetic resonance imaging (fMRI) involves placing participants in a machine that measures brain activity by detecting changes in cerebral blood flow. When neurons are active, they require the rapid delivery of blood and nutrients; this is known as the hemodynamic response. With this method, researchers are able to identify specific regions of the brain that are relatively more or less active during particular tasks. However, because there is a time lag between brain activity and peak hemodynamic response, fMRI has poorer temporal resolution than EEG. Because of their complementary strengths and weaknesses, fMRI and EEG may be used together to provide a more complete picture of the neural underpinnings of various psychological phenomena.
This functional magnetic resonance image (fMRI) was taken while one of your authors engaged in a finger-tap exercise, touching each digit of her right hand one by one with her thumb for 20 seconds followed by holding very still for 20 seconds. The red and yellow areas indicate parts of her brain that were selectively more active during the finger-tapping task than when she tried to stay very still.

Courtesy of Laura Freberg

Table 4.1 Research Methods in Biological Psychology

  • Histology: Microscopic examination of the nervous system. Question answered: How does the structure of nervous system cells correlate with behaviour?
  • Skin conductance response (formerly galvanic skin response): Measurement of electricity passed between two surface electrodes placed on the skin of the hand or finger. Question answered: What is a person’s state of arousal?
  • Electroencephalogram (EEG): Measurement of the brain’s electrical activity using electrodes placed on the scalp. Question answered: What is a person’s state of arousal?
  • Event-related potential (ERP): Measurement formed by averaging EEG responses to a stimulus, such as a light or tone. Question answered: Did the person perceive the stimulus?
  • Single cell recording: Measurement of a single neuron’s activity obtained through a surgically implanted electrode. Question answered: What types of stimulation make this neuron respond?
  • Magnetoencephalography (MEG): Recording of the tiny amounts of magnetic output of the brain. Question answered: What parts of the brain react to this stimulus?
  • Positron emission tomography (PET): Measurement that uses the accumulation of radioactively tagged glucose or oxygen to identify activity levels in parts of the brain. Question answered: What parts of the brain are active during a particular task?
  • Functional magnetic resonance imaging (fMRI): Identification of active parts of the brain using magnetism to track the flow of oxygen. Question answered: What parts of the brain are active during a particular task?
  • Electrical stimulation: Application of small amounts of electricity through a surgically implanted electrode. Question answered: What behaviours occur if we stimulate this part of the brain?
  • Optogenetics: Genetically inserted light-sensitive proteins allow cells in the brain to be turned on with light. Question answered: Which types of cells are active during particular behaviours?
  • Transcranial magnetic stimulation (TMS): Application of magnetic fields to the brain through an instrument held near the scalp. Question answered: What behavioural changes occur when magnetism is applied to the brain?
  • Lesions: Naturally occurring or deliberate damage to the brain. Question answered: What behavioural changes are correlated with brain damage?

© Cengage Learning Moving into the 21st century, the ranks of neuroscientists continue to grow, from 500 members of the Society for Neuroscience in 1969 to more than 40 000 members today (Society for Neuroscience, 2016). The Canadian Association for Neuroscience was formed in 1981 and currently consists of approximately 1000 researchers from across the country (Canadian Association for Neuroscience, 2019). The Montreal Neurological Institute, discussed in Chapter 1, is the largest specialized neuroscience research centre in Canada, and one of the largest in the world. Canada is a global leader in neuroscientific research, with researchers publishing not only a great volume of peer-reviewed articles, but high-impact articles that are cited frequently by other researchers (Lariviere et al., 2010). This surge in neuroscientific research over the past few decades has been coupled with an increase in the number of neuroscience courses and degrees offered at universities in Canada and around the world. It is very likely that your university psychology department offers a number of courses in neuroscience, and you might have a separate major or minor as well. As more scientists are trained in the neurosciences, we can look forward to continued innovations in both technology and our knowledge about the biological basis of mind.

Connecting to Research Examining the Brain Activity of a Patient with Acquired Synesthesia Synesthesia, discussed in Chapter 5, is a condition in which individuals experience a mingling of the senses. For example, the most common form of synesthesia is grapheme-colour synesthesia, in which the perception of numerals and letters is associated with the experience of colours. Most people with synesthesia have it from birth. However, a Canadian man known as Patient George is one of two people known to have acquired synesthesia after suffering damage to the thalamus (in his case, a stroke in the left thalamus). Nine months after his stroke, Patient George reported experiencing an intense sensory-emotional experience as a result of hearing the brass theme from James Bond. While listening to the theme song, he experienced feelings of ecstasy and felt as if his body were “riding the music.” Using fMRI techniques, a team of researchers led by Tom Schweizer at St. Michael’s Hospital in Toronto, Ontario, sought to investigate the neural underpinnings of Patient George’s unique perceptual experiences (Schweizer, Li, Fischer, Alexander, Smith, Graham, & Fornazarri, 2013). The Question: While listening to the James Bond theme, what areas of Patient George’s brain become active, compared to control individuals? Methods While in an fMRI machine, researchers measured the brain activity of Patient George as he listened to the James Bond theme, as well as a similar piece of music that did not elicit any extreme emotional or out-of-body-like experiences. They then measured the brain activity of six age- and education-matched neurologically healthy control participants as they listened to the same pieces of music. Ethics Patient George, as well as the control participants, would have provided their informed consent for participation in the experiment. In order to protect the privacy of individuals used in case study research, pseudonyms or initials are often used. Thus, “George” is not the real name of this individual, but it is how he is identified in academic and popular press reports.
Results For the control participants, comparison of the brain activity in response to the Bond theme music versus the control music reveals virtually no differences, as indicated by the lack of colourful pixels in Figure 4.1(b). However, for Patient George, compared to the areas of the brain that were active for the control music, hearing the Bond music led to greater neural activation in widespread areas of the brain including the auditory cortex, somatosensory cortex, motor cortex, thalamus, hippocampus, and particular regions of the insula, cerebellum, and prefrontal cortex (see Figure 4.1(a)). Figure 4.1 MRI Scan. Part (a) shows those areas of Patient George’s brain that become more active (red/orange pixels) while he listens to the James Bond theme as compared to acoustically similar music that does not elicit the synesthetic experiences he reports having during the Bond theme. Part (b) shows the averaged response of the neurologically healthy controls to the Bond theme as compared to the control music. It is clear that Patient George’s brain is responding to the Bond theme music in a nontypical manner.

Source: Schweizer, T. A., Li, Z., Fischer, C. E., Alexander, M. P., Smith, S. D., Graham, S. J., & Fornazarri, L. (2013). From the thalamus with love: A rare window into the locus of emotional synesthesia. Neurology, 81(5). https://doi.org/10.1212/WNL.0b013e31829d86cc; permission conveyed via Copyright Clearance Center, Inc. Conclusions An examination of Patient George’s brain activity lends support to the unique subjective experiences described by the patient. Rather than simply producing a pleasant musical experience, which would activate the reward circuit of the brain, the Bond theme activates areas of Patient George’s brain that are commonly associated with intense emotional arousal, as well as sensory and motor areas that support his reported extracorporeal (“riding the music”) experience.

4-2 How Is the Nervous System Organized? The nervous system can be divided into two major components: the central nervous system (CNS) and the peripheral nervous system (PNS) (see Figure 4.3). The CNS consists of the brain and the spinal cord, which extends from the brain down the back of the body. Although we often see the brain and the spinal cord referred to as separate structures, they form one continuous unit of tissue. You might think of the brain as having a tail known as the spinal cord. Nerves branch outward from the CNS to all areas of the body: the lungs, heart, and other organs; the eyes and ears; and the arms, legs, fingers, and toes. As soon as a nerve branches outward from the CNS, it is considered part of the PNS. Another way to know you have left the CNS for the PNS is to look for the protection of bone. Nerves of the CNS are encased in bone, but those of the PNS are not. Figure 4.3 The Organization of the Nervous System. The nervous system has two major divisions: the CNS, containing the brain and the spinal cord, and the PNS, containing all nerves that exit the brain and the spinal cord. The CNS is protected by bone, but the PNS is not.

Argosy Publishing, Inc. To examine the relationships between the nervous system and behaviour, we will first zoom in to look at the microscopic world of the nerve cells, or neurons, examining how these cells communicate with one another, and how the chemicals released during this process, neurotransmitters, affect behaviour. Later, we will zoom out to examine the larger view of the structures making up the nervous system. Talking about the connections between structures of the nervous system and behaviour requires a quick word of caution. As mentioned in Chapter 3, saying that we have a “gene for” a behaviour is overly simplistic. Saying we have a “centre for” a behaviour in the brain is equally misleading. Although we can identify structures that participate in certain behaviours, the biology of mind involves intricate and overlapping patterns of activity in networks made up of richly connected structures. Psychology as a Hub Science The Centre for Aging + Brain Health Innovation Like all industrialized countries, Canada has an aging population. By 2030, almost one-quarter of the Canadian population will be seniors, defined as individuals aged 65 or older (Government of Canada, 2014). In response to this demographic shift, the Centre for Aging + Brain Health Innovation (CABHI) was established in 2015. The goal of the centre is to facilitate collaborations and innovations that will help improve quality of life for Canada’s (and the world’s) aging population. CABHI brings together people from health care, the sciences, industry, and not-for-profit and government agencies so that innovative ideas can be funded, tested, and implemented. CABHI has provided support for over 200 projects, including ElliQ, a socially assistive robot for older adults, and the Hippocamera, an app developed to help individuals with dementia retain memories.

Adapted from “Mapping the Backbone of Science,” by K. W. Boyack et al., 2005, Scientometrics, 64(3), 351–374. With kind permission from Springer Science+Business Media. Other projects are aimed at more specific at-risk populations, such as First Nations older adults living with dementia and their caregivers. Led by Carrie Bourassa from the University of Saskatchewan and Danette Starblanket from Morning Star Lodge (an Indigenous community-based health research lab), one CABHI-supported project is examining the effectiveness of an Indigenous languages app to see how engagement with the language games and quizzes stimulates brain activity. The researchers also want to determine the best ways of supporting the adoption of such technologies within the community. To this end, the researchers have partnered with community representatives from the First Nations groups in southern Saskatchewan who will be participating in the project (Ihilchik, 2019). The Hippocamera, an app intended to function as an external hippocampus to help people with memory loss, is one of the many projects supported in part by funding from CABHI. This project is led by Morgan Barense and a team of researchers at the University of Toronto, including graduate student Bryan Hong, pictured here showing the app to a senior during a CABHI event.

Gary Beechey and the Centre for Aging + Brain Health Innovation (CABHI) Diverse Voices in Psychology Integrative Approaches: Cultural Neuroscience Consistent with our argument that multiple perspectives considered together produce a richer and more accurate analysis of the mind, we would like to introduce you to a combination of two seemingly distant fields, culture and neuroscience, that together form cultural neuroscience. Cultural neuroscience has been defined as “an interdisciplinary field that examines how cultural and biological mechanisms mutually shape human behaviour across phylogenetic, developmental, and situational timescales” (Chiao, 2015, p. 283). In other words, cultural neuroscientists explore how genetics, brain structures, and cultures interact to shape behaviour (see Figure 4.2). Figure 4.2 Culture and the Brain. Scientists working in the new discipline of cultural neuroscience argue that brain activity can differ when people of different cultures complete the same task. In this ERP study, participants completed an emotional Stroop task while wearing EEG caps that recorded the electrical activity of the brain in response to particular stimuli. In this task, participants are presented with either emotionally congruent face-voice pairs (e.g., a sad face paired with a sad voice) or emotionally incongruent face-voice pairs (e.g., a sad face paired with a fearful voice). The job of the participants is to indicate as quickly as possible what emotion is being conveyed by either the face (face judgment task) or the voice (voice judgment task) while ignoring the other modality. The N400 is a particular ERP waveform that is part of the normal brain response to meaningful stimuli. Previous research has shown that stronger N400s occur in response to unexpected or incongruent information, an effect known as the N400 incongruity effect. As shown in the results, North American participants showed a much larger N400 response on the incongruent voice judgment trials compared to the face judgment trials. Behavioural data from the Stroop task confirm that it is more difficult for the North American participants to ignore faces than to ignore voices. Chinese participants, in contrast, performed comparably on both tasks. One explanation for these cultural differences in multisensory emotion processing is the different display rules, or norms about emotional expression, that are followed in East Asian versus Western/North American cultures (a topic we will return to in Chapter 7).

Source: Liu, P., Rigoulot, S., & Pell, M. D. (2016). Cultural immersion alters emotion perception: Neurophysiological evidence from Chinese immigrants to Canada. Social Neuroscience, 12(6), 685–700. https://doi.org/10.1080/17470919.2016.1231713; permission conveyed via Copyright Clearance Center, Inc. Cultural neuroscience asks two main questions. First, how do cultural phenomena such as beliefs and values influence genetics and brain structures? We saw an example of this question in our discussion of the Waorani and Yanomamö tribes in Chapter 3. The Yanomamö cultural value of standing down between raids, not shared by the Waorani, altered the reproductive success of the most aggressive Yanomamö warriors. Among the Yanomamö, the most aggressive men had the most children, while among the Waorani, the least aggressive men had the most children. In other words, a difference in cultural values influenced the genetic make-up of subsequent generations of these groups. The second question asks how genetics and brain structure shape cultural phenomena. At a minimum, we might argue that forming cultures is something the human brain always seems to do. You will read about more specific examples of this type of process later in the textbook. In Chapter 12, we explore evidence suggesting that cultures with fewer individuals carrying the SS SERT genotype (also discussed in Chapter 3) are more likely to be individualistic than cultures where the SS genotype is more typical.

4-3 Neurons and Neurotransmitters: Electrochemical Communication We first zoom in to explore the microscopic building blocks of the nervous system, which are the nerve cells, or neurons. Human brains have about 86 billion neurons. To put this number into perspective, consider the following: If each neuron represented a second, ticking off the neurons in your brain alone would take more than 2700 years. With each neuron forming an average of several thousand connections with other neurons, the connections in the human brain number in the hundreds of trillions. In addition to these large numbers of neurons, the nervous system contains many supporting cells, known as glia. Once you are familiar with the structure of neurons and glia, we will explore the ways neurons communicate with one another. Neural communication is a two-step process. The first step takes place within a single neuron and involves the generation of an electrical signal. The second step takes place between two neurons and involves the release of a chemical messenger from one neuron that affects the activity of the second. This is why the nervous system is often referred to as the body’s electrochemical communication system. Although it sounds like a script from a science fiction film, researchers are capable of growing neurons, such as these from the retina of the eye, on silicon chips. The chip electrically stimulates the growing neurons. Future uses of this type of technology might include brainlike computer networks and better prostheses for people who have lost a limb.

MPI Biochemistry/Volker Steger/Science Source
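As a quick sanity check on the counting arithmetic above, here is a minimal Python sketch. The 86 billion neuron count and the “several thousand connections” estimate come from the text; the figure of 5000 connections per neuron is an assumed midpoint used only for illustration.

    # Back-of-the-envelope check of the neuron-counting claim in the text.
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365   # ignoring leap years

    neurons = 86e9                  # approximate neuron count from the text
    connections_per_neuron = 5000   # "several thousand"; assumed midpoint

    years_to_count = neurons / SECONDS_PER_YEAR
    total_connections = neurons * connections_per_neuron

    print(f"One neuron per second: about {years_to_count:,.0f} years to count them all")
    print(f"Total connections: roughly {total_connections:.1e}")

Running this prints about 2,727 years and roughly 4.3e+14 connections, matching the “more than 2700 years” and “hundreds of trillions” figures in the text.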

4-3a Neurons and Glia Neurons share many characteristics with other cells found in the body. Like other cells, a neuron has a large central mass or cell body, and within the cell body, it has a nucleus (see Figure 4.4). Most housekeeping tasks of the cell, such as the translation of genetic codes into the manufacture of proteins, take place in the cell body. Like other cells, neurons feature an outer membrane, which surrounds the neuron and forms a barrier between the fluid outside the cell (the extracellular fluid) and the fluid inside the cell (the intracellular fluid). The neural membrane is composed of fatty materials that do not dissolve in water, so even though it is only two molecules thick, it is able to hold apart the water-based fluids on either side. Pores within the membrane act as channels that allow chemicals to move into or out of the cell. Figure 4.4 The Neuron. Neurons share many features with other living cells but are specialized for the processing of information. (a) Parts of the neuron. Like other types of animal cells, the neuron features a nucleus in its cell body and a fatty membrane that separates intracellular and extracellular fluids. Unlike most other cells, the neuron has specialized branches, the axon and the dendrites, that pass information to and receive information from other cells. (b) A closeup view of the axon membrane. A thin, oily membrane separates the intracellular fluid inside the neuron from the extracellular fluid outside the neuron. Pores spanning the membrane act as channels that allow ions to move into and out of the neuron. (c) A closeup view of the axon terminal. Within the axon terminal are synaptic vesicles, which contain chemical messengers called neurotransmitters that transmit signals between neurons. Later in the chapter, we’ll see how these neurotransmitters communicate with receptors on the dendrites of other neurons.


Argosy Publishing, Inc. Unlike other types of body cells, neurons have two types of branches that extend from the cell body to allow the neuron to perform its information-processing and communication functions. The branches known as axons are responsible for carrying information to other neurons, while the branches known as dendrites receive input from other neurons. Although neurons may have many dendrites, each neuron typically has only one axon. Many axons communicate with immediately adjacent cells and are, therefore, only small fractions of a millimetre in length, but other axons are much longer. When you stub your big toe on a rock, the neurons that process this information have cell bodies in your lower back and axons that extend all the way down to your sore toe, a distance of about 0.9 metre (around 3 feet), depending on your height. At its farthest point from the cell body, an axon bulges to form a terminal. If you look inside an axon terminal with an electron microscope, you can see round, hollow spheres known as synaptic vesicles, which contain molecules of chemical messengers. We have been using the term white matter to describe pathways formed by nerve fibres or axons. You have probably heard the term grey matter as well. Now that you understand the structure of neurons, these terms will make more sense. When we prepare neural tissue for study using microscopes, the chemicals used to preserve the tissue are absorbed by cell bodies. This gives cell bodies a pink–grey colouring. In contrast, these chemicals are repelled by the insulating material covering most axons because the insulation has a fatty composition that doesn’t mix well with the watery preservatives (we discuss the nature of this insulation shortly). As a result, axons look white, like the fat in a steak. When we examine images of the brain, areas that look grey have a high density of cell bodies, whereas areas that look white consist of large bundles of axons. If neurons are the stars of the nervous system team, glia are the trainers, coaches, and scorekeepers. They make it possible for neurons to do their job effectively. Some glia (from the Greek word for “glue”) provide a structural matrix for neurons, ensuring that the neurons stay in place (see Figure 4.5). Other glia are mobile, allowing them to move to a location where neurons have been damaged to clean up debris. Glia form tight connections with the blood vessels serving the nervous system. This forms a blood–brain barrier that prevents many toxins circulating in the blood from exiting into brain tissue where neurons could be harmed. Psychoactive drugs, by definition, are substances capable of penetrating the blood–brain barrier with ease. We discuss psychoactive drugs and the ways in which they act on the nervous system in Chapter 6. Figure 4.5 The Blood–Brain Barrier. Glia form tight connections with the blood vessels in the nervous system, preventing many toxins from entering the brain. Glia also help hold neurons in place and form the myelin on some axons.

Argosy Publishing, Inc. The blood–brain barrier might offer too much protection to the brain in some cases. Many chemotherapy agents used to treat cancer in other parts of the body cannot penetrate the blood–brain barrier, which complicates the treatment of tumours in the brain. In vertebrates such as humans, glia wrap around some axons like sausages on strings at a delicatessen, forming an important layer of insulation called myelin. Myelin makes neural signalling fast and energy efficient. We will discuss how myelin accomplishes this when we discuss neural signalling later in the chapter. By speeding up the transmission of neural signals and contributing to quicker recovery between signals, myelin increases the amount of information a neuron can transmit per second by a factor of about 3000 (Giedd et al., 2015). Not all axons in the human nervous system are myelinated. When you hurt yourself, the fast, sharp “ouch” message is carried to the brain by myelinated axons, but the dull, achy message that lasts a lot longer is carried by unmyelinated axons. One type of glia forms the myelin in the brain and the spinal cord, and a second type forms the myelin in the remainder of the nervous system (see Figure 4.6). These two types of glia behave quite differently from each other when they are damaged. Glia in the brain and the spinal cord form scar tissue, inhibiting repair to the damaged nerves. Because of this feature, we consider damage in the CNS to be permanent. Considerable research is under way to figure out how to repair such damage, including work using stem cells to grow bridges across the damaged areas. In contrast, damaged glia in the PNS do not form scar tissue and instead help the damaged axons regrow. As a result, nerve damage in these areas can heal. If this were not so, operations to reattach limbs would be doomed to failure. Today, not only are digits and even limbs that were lost in accidents routinely reattached to their rightful owners, but a number of patients whose own hands or faces were damaged beyond repair have undergone successful transplants from cadavers (Clarke & Butler, 2009; Dubernard, Owen, Lanzetta, & Hakim, 2001). Figure 4.6 Glia Form Myelin. One type of glia forms myelin in the CNS, and a second type forms myelin in the PNS. These types of glia respond differently to nerve damage, making nerve damage outside the brain and spinal cord easier to repair.


Argosy Publishing, Inc. As we explore further in Chapter 11, myelin growth in the human nervous system begins before birth, but it is not completed until early adulthood, possibly as late as age 25. The last area of the nervous system to be myelinated is the prefrontal cortex, which is involved with judgment and morality (Hayak et al., 2001). Until myelin in this area is mature, these neurons do not work as efficiently, which is one of the possible reasons teenagers and adults sometimes make different decisions (Baird et al., 1999). You may recall some experiences from your early teens that appear shocking and overly risky to your adult brain. Worse yet, as you move through your 20s, you might find yourself agreeing more frequently with your parents. Maurice Desjardins is the first individual in Canada to receive a face transplant. In 2011, a hunting accident left Desjardins without a jaw, nose, or teeth (as pictured on the left). Despite numerous reconstructive surgeries, he spent years unable to breathe or eat properly. He was in constant pain from damaged nerves in his face. The extent of his disfigurement also caused social pain and isolation. In 2018, a team at Montreal’s Maisonneuve-Rosemont Hospital completed the 30-hour facial transplant procedure. This type of operation would be useless without the neurons’ ability to form new connections. Doctors estimated that it would take about a year for Desjardins to relearn basic skills such as eating, drinking, and smiling with his new face.

THE CANADIAN PRESS/Graham Hughes

4-3b Neural Signalling Now that we have a working knowledge of the structure of neurons, we are ready to talk about how they function. A neuron is a sophisticated communication and information-processing system that receives input, evaluates it, and decides whether to transmit information to neurons downstream. Its actions are similar to your own when you receive a juicy bit of gossip from a friend and then decide whether to tell somebody else. As we mentioned earlier, neural communication is a two-step process. In the first step, which takes place in the signalling neuron’s axon, the neuron generates an electrical signal known as an action potential. This signal travels the length of the axon from its junction with the cell body to its terminal. In the second step, which takes place between two neurons, the arrival of an action potential at the axon terminal of the first neuron signals the release of chemical messengers, which float across the extracellular fluid separating the two neurons. These chemicals influence the likelihood that the second neuron will respond with its own action potential, sending the message along. Electrical Signalling The production of action potentials can be demonstrated using axons dissected from a squid and placed in a tub of seawater, which has a chemical composition similar to the fluid surrounding our body cells (Hodgkin & Huxley, 1952). Of all the possible sources of axons on the Earth, why choose squid? Certain axons from a squid can be as much as 1 millimetre in diameter, large enough to see with the naked eye. The squid axon is also large enough that you can insert a recording electrode into its interior without disrupting its function. The readings from inside the axon can then be compared with readings from a recording electrode placed in the seawater. When a neuron is not processing information, we say that it is at rest. When a cell is at rest, the difference between the readings from the interior of the axon and the external fluid is known as the resting potential. Our recording will show that the interior of the neuron is negatively charged relative to its exterior due to the different chemical composition of the intracellular and extracellular fluids. Myelination of the human nervous system takes more than 20 years to complete.


Tau, G. Z., & Peterson, B. S. (2010). “Normal development of brain circuits.” Neuropsychopharmacology Reviews, 35, 147–168. Let’s assume that our resting neuron now begins to receive chemical messages from another neuron, a process we discuss in more detail shortly. Neurons can respond to incoming chemical signals by becoming either depolarized or hyperpolarized. The word polarized means “far apart,” such as when political factions disagree. Being depolarized means we have moved closer together, and being hyperpolarized means we have moved even farther apart than before. In the case of neurons, depolarization means that the difference between the electrical charges of the extracellular and the intracellular recordings is decreasing. Hyperpolarization means that the difference is increasing. Some squid axons are large enough to be seen with the naked eye and remain active in a bath of seawater for hours. These features make studying neural activity in a squid axon relatively simple.

© Cengage Learning When a neuron is depolarized by sufficient input, it reaches a threshold for producing an action potential. A threshold is the point at which an effect, the action potential in this case, is initiated. Once this threshold is reached, the generation of an action potential is inevitable. Approaching the threshold for initiating an action potential is similar to pulling the trigger of a gun. As you squeeze the trigger, nothing happens until you reach a critical point. Once that critical point is reached, the gun fires, and there is nothing you can do to stop it. Reaching threshold initiates a sequence of events that reliably produces an action potential (see Figure 4.7). These events involve the opening and the closing of pores or channels in the neural membrane, which in turn allow certain chemicals to move into and out of the cell. These chemicals are in the form of ions, or electrically charged particles dissolved in water. When threshold is reached, channels open, allowing one type of ion, sodium, to rush into the neuron. Because sodium ions carry a positive electrical charge, we can see their movement reflected in a steep rise in our recording of the difference between the internal and the external electrodes. At the peak of the action potential, our recording has reversed itself from the resting state. Now the interior of the cell is more positively charged than the outside. Figure 4.7 The Action Potential. Once threshold is reached, an action potential is triggered. The movement of sodium and potassium ions across the axon membrane is reflected in the rise and fall of our recording, respectively. A refractory period follows each action potential, and triggering another action potential during this time is more difficult.

© Cengage Learning Near the peak of the action potential, channels open that allow another type of ion, positively charged potassium, to move across the membrane. Potassium begins to leave the cell. As the interior loses these positively charged potassium ions, our recording heads in the negative direction again. Following the production of the action potential, the neuron requires a time-out, or refractory period, during which it returns to its resting state. During this refractory period, the cell is unable or unlikely to respond to further input by producing another action potential. The size and shape of action potentials are always the same, whether we’re recording them in a squid or in a human. You won’t see recordings of short, fat action potentials or tall, skinny ones. Either an action potential occurs, or the cell remains at rest; there is no middle ground. Because of this consistency, we say that action potentials are all or none. Action potentials do not affect the entire axon all at once. The process we just described takes place first in a small segment of the axon where the axon connects to the cell body. The next step is propagation, or the duplication of the electrical signal down the length of the axon to the axon terminal, where it initiates the release of chemical messengers. We mentioned earlier that myelinated neurons enjoy some advantages in efficiency and speed, and we are now ready to discuss why that is the case. Propagation takes place differently in myelinated and unmyelinated axons. In an unmyelinated axon, action potentials occur step by step, from one small section of the axon to the next adjacent section, down the entire length of the axon. In contrast, action potentials in myelinated axons are formed only at the sections of the axon membrane between adjacent segments of myelin, known as nodes of Ranvier. In other words, propagation in myelinated axons can “skip” the sections covered by myelin. You might think about propagation in unmyelinated versus myelinated axons as being similar to shuffling your feet versus taking long strides. Which covers the most ground faster and more efficiently? Sushi made from puffer fish, known as fugu, is a delicacy, but when prepared poorly, it can result in sickness or death. Chefs who prepare fugu undergo extensive training and licensing in Japan. The puffer fish toxin blocks the movement of sodium into cells, making electrical signalling impossible. As a result, diners who eat poorly prepared fugu can become paralyzed and suffocate.
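The rise and fall of the recording shown in Figure 4.7 can be sketched as a toy trace in Python. The phase labels follow the text’s description; every number here is an illustrative cartoon value, not a physiological measurement.

    # Cartoon of the membrane-potential recording during one action potential.
    REST = -70.0        # mV; typical textbook resting potential
    THRESHOLD = -55.0   # mV; illustrative threshold value

    trace = [REST, REST]          # at rest
    trace += [-62.0, THRESHOLD]   # depolarizing input pushes the cell to threshold
    trace += [0.0, 40.0]          # Na+ rushes in; the interior becomes positive
    trace += [-20.0, -75.0]       # K+ leaves; the recording falls, overshooting rest
    trace += [REST]               # refractory period ends; back to rest

    print(" -> ".join(f"{mv:+.0f} mV" for mv in trace))

However strong the triggering input, the spike portion of this trace would look the same: action potentials are all or none.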

JOE SCHERSCHEL/National Geographic Creative Propagation in unmyelinated axons works well, as evidenced by the wealth of invertebrate life on the Earth, from the snails in your garden to the giant squid of the oceans. These animals survive with no myelin, but their neural communication is not fast or energy efficient compared to ours. Forming action potentials at each section down the length of the axon is time consuming, like taking the local bus that stops at every block. In addition, cleaning up after all these action potentials uses a lot of energy (Swaminathan, Burrows, & McMurray, 1982). The more action potentials it takes to move a signal down the length of the axon, the more energy is expended returning the cell to its resting state. Propagation in myelinated axons is fast and efficient (see Figure 4.8). After an initial action potential is generated near the cell body, the current flows beneath a segment of myelin until it reaches a node of Ranvier, where another action potential occurs. Like the express bus, the action potentials skip the myelinated sections of the axon, reaching their destination, the axon terminal, about 20 times faster than if the axon were unmyelinated. By covering the same distance with fewer action potentials, the myelinated axon uses less energy returning to the resting potential than an unmyelinated axon would need. Figure 4.8 Propagation of the Action Potential. Action potentials move down the length of the myelinated axon more quickly than they move down an unmyelinated axon.
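To put the speed difference in concrete numbers, here is a small illustrative calculation in Python. The 0.9-metre toe-to-spinal-cord distance and the roughly 20-fold speed-up come from the text; the 1 metre-per-second baseline for a slow unmyelinated axon is an assumption made only for illustration.

    # Illustrative comparison of conduction times (ballpark, not exact physiology).
    distance_m = 0.9            # toe-to-spinal-cord distance from the text
    unmyelinated_speed = 1.0    # m/s; assumed speed of a slow unmyelinated axon
    myelinated_speed = unmyelinated_speed * 20   # about 20x faster, per the text

    print(f"Unmyelinated: {distance_m / unmyelinated_speed * 1000:.0f} ms")  # 900 ms
    print(f"Myelinated:   {distance_m / myelinated_speed * 1000:.0f} ms")    # 45 ms

Nearly a full second versus a few hundredths of a second: the shuffling-versus-striding analogy, in numbers.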


Argosy Publishing, Inc. Once the action potential reaches the axon terminal, the neural communication system switches from an electrical signalling system to a chemical signalling one. Chemical Signalling The point of communication between two neurons is known as a synapse. At the synapse, neurons do not touch one another physically. Instead, they are separated by tiny gaps filled with extracellular fluid. Because electrical signals are unable to jump this gap, neurons send chemical messengers instead. These chemical messengers are called neurotransmitters (see Table 4.2).

Table 4.2 Important Neurotransmitters

Neurotransmitter: Behaviours influenced by the neurotransmitter
Acetylcholine (ACh): movement; memory; autonomic nervous system function
Epinephrine (adrenalin): arousal
Norepinephrine (noradrenalin): arousal; vigilance
Dopamine: movement; planning; reward
Serotonin: mood; appetite; sleep
Glutamate: excitation of brain activity
GABA: inhibition of brain activity
Endorphins: pain

© Cengage Learning Figure 4.9 illustrates the sequence of events triggered by the arrival of an action potential at an axon terminal. The neurotransmitters in the axon terminal are contained in synaptic vesicles. The arrival of an action potential releases the vesicles from their protein anchors, much like boats leaving a dock, and the vesicles migrate rapidly to the cell membrane. Because the vesicles are made of the same thin, oily material as the membrane, they easily fuse with the membrane and spill their contents into the synaptic gap, similar to popping soap bubbles in a bathtub. Following release, the vesicles are pinched off the membrane and refilled. A node of Ranvier, located between two adjacent segments of myelin, is rich in sodium channels, which makes the formation of action potentials at the node possible.

The Science Picture Company/Alamy Stock Photo Figure 4.9 Chemical Signalling. Because most neurons are separated from one another by extracellular fluid, the action potential cannot jump from one neuron to the next. To cross the gap between neurons, chemical signals are used instead. (a) Neurotransmitter release. The arrival of an action potential at the axon terminal triggers a sequence of events that results in the release of neurotransmitter molecules, which float across the synaptic gap to interact with receptors on the receiving neuron. (b) Reuptake. After interacting with receptors, neurotransmitter molecules are often recaptured by the neuron that released them to be recycled and used again later.


Argosy Publishing, Inc. The neurotransmitters released across the synaptic gap come into contact with special channels on the receiving neuron, known as receptors. Receptors work with the neurotransmitters like locks and keys. Only a neurotransmitter with the right shape (the key) can attach itself or bind to a particular receptor (the lock). Neurotransmitters do not stay bound to receptors long. Once they pop out of the receptor binding site, neurotransmitter molecules drift away from the gap, are broken down by enzymes, or return to the axon terminal from which they were released in a process called reuptake. In reuptake, special channels in the axon terminal membrane known as transporters allow the neurotransmitters to move back into the releasing neuron where they are repackaged for later use. Many important drugs, including the antidepressant drug fluoxetine (Prozac), interfere with or inhibit this reuptake process. The interaction between neurotransmitters and their receptors can have one of two effects on the receiving neuron: excitation or inhibition. When a neurotransmitter has an excitatory effect, it slightly depolarizes the receiving neuron, increasing the likelihood that the neuron will reach threshold and initiate an action potential. Recall that depolarization reduces the difference between the electrical environments inside and outside the neuron. When a neurotransmitter has an inhibitory effect, it slightly hyperpolarizes the receiving neuron, moving the cell farther from threshold and reducing the likelihood that it will initiate an action potential. Recall that hyperpolarization increases the difference between the electrical environments inside and outside the neuron. Excitatory messages seem logical. One neuron is telling another to “pass the message along.” Inhibitory messages, however, seem somewhat strange at first glance. Why would our nervous systems need a message that says, “Don’t pass the message along”? Tetanus, for which you probably have been vaccinated, provides a dramatic example of what can happen when inhibition doesn’t work properly. The toxin produced by the bacteria responsible for tetanus selectively damages inhibitory neurons in the parts of the nervous system that control muscle contraction. Normally, excitatory inputs that contract muscles coordinate their activity with inhibitory inputs that tell muscles to relax, allowing the steady hands we need to put in a contact lens, for example. Without the input of the inhibitory neurons, the system is left with excitation only, and the result is the extreme muscle contraction that gives tetanus its other name—“lockjaw” (see Figure 4.10). Figure 4.10 Tetanus Blocks Motor Inhibition. The severe muscle contraction that gives tetanus its nickname of lockjaw results from the toxin’s blocking the release of inhibitory neurotransmitters in the motor systems of the brain and spinal cord. (a) Normally, inhibitory input balances contraction to maintain smooth movements. (b) Without the counterbalance of inhibition, too much muscle contraction occurs.

© Cengage Learning You might be wondering, “How do we know what the different neurotransmitters do?” Much of our knowledge about neurotransmitters is due to experimentation with various drugs and toxins. Agonists are drugs that enhance the actions of neurotransmitters, and antagonists are drugs that inhibit the actions of neurotransmitters. An agonist might enhance the actions of a particular neurotransmitter by increasing its release, by blocking its reuptake, or by mimicking the neurotransmitter and activating its postsynaptic receptor. For example, cocaine prevents the dopamine transporter protein from performing its normal reuptake function, resulting in an accumulation of dopamine in the synaptic cleft. Increased activation of postsynaptic dopamine receptors leads to the reinforcing effects of cocaine. In contrast to an agonist, an antagonist might inhibit the actions of a particular neurotransmitter by blocking its release, by destroying the neurotransmitter in the synapse, or by mimicking the neurotransmitter and binding to a postsynaptic receptor in a way that prevents neurotransmitter binding. For example, beta blockers, a class of drugs commonly prescribed to individuals who have suffered a heart attack, are antagonists that block certain receptor sites for epinephrine and norepinephrine, weakening the effects of the sympathetic stress response (described later in this chapter) on the heart. Two axons (in purple) are forming synapses, or junctions where communication will occur, with a neuron’s cell body (in yellow).
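The consequence of slowing reuptake can be sketched with a toy decay model in Python. If we treat clearance of transmitter from the synaptic gap as simple exponential decay, a lower clearance rate leaves more molecules in the gap at every point in time, which is the intuition behind the reuptake-blocking drugs described above. Both rate constants here are made-up illustrative values, not measured ones.

    import math

    # Toy model: the fraction of released transmitter still in the synaptic gap
    # decays exponentially as molecules are cleared, largely by reuptake.
    def fraction_remaining(t, clearance_rate):
        return math.exp(-clearance_rate * t)

    normal = 2.0    # arbitrary units; fast clearance with reuptake working
    blocked = 0.5   # slower clearance when reuptake is partially blocked

    for t in (0.5, 1.0, 2.0):
        print(f"t={t}: normal={fraction_remaining(t, normal):.2f}, "
              f"reuptake blocked={fraction_remaining(t, blocked):.2f}")

With reuptake slowed, transmitter lingers in the gap and keeps interacting with receptors, which is the mechanism behind the antidepressant effect discussed next.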

Eye of Science/Science Source The SSRI label for some antidepressant medications, including Prozac, stands for selective serotonin reuptake inhibitor. People who are depressed often have lower-than-normal serotonin activity at the synapse. If you inhibit the reuptake of serotonin, more molecules remain in the synaptic gap longer, where they can continue to interact with receptors. Serotonin activity increases, relieving depression, because we get more “bang for the buck” each time serotonin is released. Synapses usually occur in many locations on the dendrites or cell body of the receiving neuron, and the depolarizing or hyperpolarizing current that results from neurotransmitter activity at these synapses drifts to the junction of the cell body and axon. If there is sufficient depolarization to reach threshold at this junction, the neuron generates an action potential. If not, it remains at rest. This adding up of all incoming messages, which determines whether the neuron generates an action potential, is called summation. The neuron’s task is not unlike the situation we face when we ask friends and family for advice. We will receive some excitatory advice (Go for it!), along with some inhibitory advice (Don’t even think about it!). Our job, like the neuron’s, is to sum our input and make a decision. Unlike us, however, the neuron cannot disregard the advice it receives.
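Summation maps naturally onto a few lines of code. Here is a minimal sketch in Python; the resting potential and threshold are typical textbook values, and the input sizes are arbitrary numbers chosen for illustration.

    # Minimal sketch of synaptic summation: excitatory inputs depolarize the
    # neuron (positive values), inhibitory inputs hyperpolarize it (negative).
    RESTING_POTENTIAL = -70.0   # mV; typical textbook value
    THRESHOLD = -55.0           # mV; typical textbook value

    def summate(inputs_mv):
        """Sum all incoming messages; fire an all-or-none spike at threshold."""
        membrane_potential = RESTING_POTENTIAL + sum(inputs_mv)
        return membrane_potential >= THRESHOLD   # True means an action potential

    # Mostly excitatory advice ("Go for it!") with a little inhibition:
    print(summate([+8, +6, +5, -2]))   # -70 + 17 = -53 mV; fires (True)

    # Mixed advice that never reaches threshold:
    print(summate([+5, -4, +3, -6]))   # -70 - 2 = -72 mV; stays at rest (False)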

4-3c Types of Neurotransmitters Researchers have identified more than 50 chemicals that serve as neurotransmitters. Table 4.2 lists some neurotransmitters that are particularly interesting to psychologists, and we will highlight a few of these in this section. In Chapter 6, we explore examples of psychoactive drugs, both therapeutic and recreational, that interact with the normal biochemistry of the nervous system. Acetylcholine (ACh) is a neurotransmitter found in many systems important to behaviour. ACh is found at the neuromuscular junction, the synapse at which the nervous system commands muscles. Interference with the action of ACh at the muscles can result in paralysis and death, making drugs that act on ACh popular for use as pesticides and as bioweapons. ACh also serves as a key neurotransmitter in the autonomic nervous system (discussed later in this chapter), which carries commands from the brain to the glands and organs. ACh is also intimately involved in the brain circuits related to learning and memory. These brain circuits are among the first to deteriorate in Alzheimer’s disease. Not surprisingly, memory deficits are among the earliest symptoms of Alzheimer’s disease to appear. Among the many drugs that act on ACh systems is the nicotine found in tobacco. A Yagua hunter in Peru loads a dart tipped with curare into the mouthpiece of his blowgun. Curare is derived from native plants and causes paralysis by blocking receptors for acetylcholine (ACh) at synapses between the nervous system and the muscle fibres.

Jack Fields/Science Source Norepinephrine activity in the brain leads to arousal and vigilance. Consistent with this role in arousal, norepinephrine is also released by the sympathetic nervous system. As we observed previously, the sympathetic nervous system prepares us to react to emergencies by providing necessary resources, such as the extra oxygen that is needed to run or throw a punch. Abnormalities in norepinephrine activity have been implicated in several psychological conditions that feature disturbances in arousal and vigilance, including bipolar disorder and post-traumatic stress disorder (PTSD), discussed in Chapter 14. Dopamine is involved with systems that govern movement, planning, and reward. Parkinson’s disease, which makes normal movement difficult, results when dopamine-releasing neurons in the brain’s movement circuits begin to die. In addition, dopamine participates in the brain’s reward and pleasure circuits by becoming active whenever we engage in behaviours that promote survival and successful reproduction, such as eating a great meal or having sex. Most drugs that produce addiction, including cocaine and methamphetamine, stimulate increased activity in dopamine circuits. In Chapter 14, we will see how disruptions to dopamine circuits have been implicated in schizophrenia and attention deficit hyperactivity disorder (ADHD). Botox, which is used to treat muscle spasms or reduce wrinkles, is made from a purified form of the toxin responsible for botulism, which is produced by bacteria that spoil food. Botox interacts with ACh by preventing its release from the axon terminal. Without the activity of ACh telling a muscle to contract, the muscle remains paralyzed. Serotonin is involved with systems regulating sleep, appetite, mood, and aggression. Consequently, these behaviours are tightly linked. As we will see in Chapter 14, people who experience depressed mood also show abnormalities in appetite and sleep. Sleep deprivation can result in changes in mood and appetite, even leading to significant overeating (Spiegel, Tasali, Penev, & Van Cauter, 2004). Endorphins, short for endogenous morphine, or morphine produced by the body, modify our natural response to pain. In evolutionary terms, it makes sense to have a system that reduces your chances of being disabled by pain during an emergency. All too frequently, however, we underestimate the extent of our injuries until we wake up the next morning feeling sore. “Runner’s high,” in which people who regularly engage in endurance sports experience a sense of well-being and reduced sensation of pain, results from the release of endorphins initiated by high levels of activity. Opioid drugs such as morphine, heroin, and oxycodone (OxyContin) produce their pain-relieving effects by mimicking the action of endorphins at the synapse. In other words, the opioid drugs are so similar in chemical structure to our natural endorphins that the receptors cannot tell them apart and treat the opioids as if they were natural endorphins. This image highlights areas of the human brain that are rich in receptors for endorphins, our natural opioids. The red areas have the most endorphin receptors, followed by the yellow areas. Opioid drugs, such as heroin or morphine, affect our behaviour by interacting with these receptors.

University of Michigan Health System

4-3d Neurogenesis and Neuroplasticity: Your Changing Brain It has long been known that neurons can die, either as part of normal brain development (see Chapter 11) or through injury, disease, toxins, and so on. For a long time, it was believed that these neurons could not be replaced: that neurogenesis, the creation of new neurons, occurred only during embryonic development and did not continue throughout the life span (e.g., Hutchins & Barger, 1998). However, we now know that adult neurogenesis does occur, and that new neurons are continuously generated by stem cells in two regions of the adult mammalian (including human) brain (Eriksson et al., 1998; Ernst & Frisén, 2015): the hippocampus (a subcortical region of the brain described later in this chapter) and the olfactory bulb (a region located above the nasal cavity and involved in the sense of smell, discussed in Chapter 5). Importantly, neurogenesis is not just a mechanism for neuron replacement; it also contributes to the plasticity of the adult brain (Gu, Janoschka, & Ge, 2012). Neuroplasticity refers to the ability of neurons to change in structure and function in response to alterations in their environment. Newly generated neurons enjoy a brief period of enhanced plasticity, and neurons generated by the adult brain may play a critical role in experience-induced plasticity (Ge, Yang, Hsu, Ming, & Song, 2007). For example, in Chapter 9 you will learn about long-term potentiation, a process that enhances communication between two neurons and plays an important role in learning and memory. Newly generated neurons in the adult brain exhibit enhanced long-term potentiation and may thus provide a mechanism that enables the brain to remain plastic in response to experiences throughout the life span (Boldrini et al., 2018; Ge et al., 2007). Examining adult neurogenesis in humans is not an easy task, and the limitations of current research methods have left many questions unanswered (Kempermann et al., 2018). However, this is a fascinating area of research that promises to enhance our understanding of how our brains are capable of change and how this change contributes to learning and cognition throughout the life span. Summary 4.1 Neural Communication

Structure: What to remember

MPI Biochemistry/Volker Steger/Science Source
Neuron: The cell body contains the nucleus and carries out most housekeeping functions; the axon is used to send information; the dendrites are used to receive information.

Argosy Publishing, Inc.
Glia: Supporting cells that hold neurons in place, clean up debris, form the blood–brain barrier, and form the myelin on some axons.

4-4 What Are the Structures and Functions of the Central Nervous System? The appearance of the human brain is not particularly impressive. It is covered in wrinkles, measures about 14 centimetres wide, 16.5 centimetres long, and 9.3 centimetres high, and weighs about 1.5 kilograms (3 pounds). The brain contains about 86 billion nerve cells, which make trillions of connections. The spinal cord contains about 1 billion nerve cells, reaches a length of 46 centimetres in men and 43 centimetres in women, and weighs about 35 grams (1.2 ounces). Its diameter ranges from about 1 centimetre to 1.5 centimetres. The spinal cord is shorter than your spine. Your bony spinal column continues to grow between birth and adulthood, but the spinal cord itself does not. The brain and the spinal cord are among the best protected parts of your body, which is not surprising given their importance for your survival. Surrounding the brain and the spinal cord are the heavy bones of the skull and spinal vertebrae. Under these bones, membranes known as meninges provide further protection. Infections of these membranes result in potentially life-threatening cases of meningitis, for which you were possibly vaccinated before beginning your university studies. Meningitis is the inflammation of the membranes (“meninges”) covering the brain and the spinal cord. This condition can result from infection with bacteria, viruses, or fungi, with the bacterial infections being the most dangerous. Fortunately, most cases of bacterial meningitis can be prevented by vaccination. This image shows the distortion of the membranes caused by infection.

Medical Body Scans/Science Source The brain and the spinal cord are further protected by a clear, plasma-like fluid known as cerebrospinal fluid (CSF). CSF seeps out of the lining of hollow spaces in the brain known as the ventricles (see Figure 4.11). Near the base of the skull, openings enable CSF to flow from the ventricles into a space within the meninges, allowing the fluid to flow around the outer surfaces of the brain and spinal cord. CSF is constantly produced, so blockages in its circulation cause the fluid to build up. The result is hydrocephalus, which means “water on the brain.” Figure 4.11 The Ventricles of the Brain. The ventricles of the brain are filled with circulating cerebrospinal fluid (CSF), which floats and cushions the brain.

Argosy Publishing, Inc. The cushioning provided by the CSF limits the damage produced by a blow to the head. As a result, single minor concussions are unlikely to produce long-term problems, but medical experts are becoming increasingly concerned about the effects of multiple concussions (Mannix, Meehan III, & Pascual-Leone, 2016). The CSF also “floats” the brain within the skull, preventing false signals that might result from the weight of some neurons pressing down on others. To diagnose some medical conditions, it is helpful to obtain a sample of CSF. This is done through a spinal tap, in which a physician removes some of the CSF circulating through the meninges surrounding the spinal cord. Normal Pressure Hydrocephalus (NPH). NPH occurs when too much cerebrospinal fluid accumulates in the ventricles of the brain, as can be seen in the image above. NPH affects more than 1 in 200 adults over the age of 55, for reasons that are not well understood. Unfortunately, the symptoms of NPH are similar to those of many other diseases that afflict older adults, including Parkinson’s disease and Alzheimer’s disease, and it is often misdiagnosed as one of these more serious conditions. However, unlike Parkinson’s and Alzheimer’s, NPH is potentially reversible, and if diagnosed early enough, it is possible to stop the damage caused by the build-up of cerebrospinal fluid and greatly increase the quality of life of affected individuals. Increasing awareness of NPH is one of the mandates of the nonprofit organization Hydrocephalus Canada.

Science Photo Library/Alamy Stock Photo; Science History Images/Alamy Stock Photo

4-4a The Spinal Cord, Brainstem, and Cerebellum The spinal cord extends from the lowest part of the brain down into the middle of your back (see Figure 4.12). If you feel the back of your skull where it curves to meet your spine, your fingers will be just below the junction of the spinal cord and the lowest structure of the brain. Although the spinal cord is only 2 percent of the weight of the CNS, its functions are vital, as evidenced by the challenges faced by people with spinal damage. The spinal cord serves as a major conduit for information flowing to and from the brain along large bundles of nerve fibres, carrying sensory information from the body and delivering commands to muscles. A total of 31 pairs of spinal nerves exit the spinal cord between segments of the bony vertebrae in your spine to serve the body. Figure 4.12 The Spinal Cord and the Spinal Nerves. Thirty-one pairs of spinal nerves exit between the bones of the vertebrae to bring sensory information back to the CNS and carry motor commands to muscles.


Argosy Publishing, Inc. Many important reflexes are initiated by the spinal cord without any assistance from the brain. One type of spinal reflex makes you pull your body away from a source of pain. It doesn’t take long for your hand to fly up when you’ve touched something hot on the stove. When tapping your knee with a hammer during a routine physical, your doctor is checking another type of spinal reflex, the knee-jerk reflex (see Figure 4.13). This reflex is interesting to your doctor because certain medical conditions, such as diabetes, affect the strength of the reflex. Still other spinal reflexes help us stand and walk. Figure 4.13 Checking Spinal Reflexes. When your physician taps on your knee, your thigh muscle stretches. Information about the stretch is carried to the spinal cord by a sensory nerve. The spinal cord sends a command to the muscle to contract to counteract the stretch, and your foot kicks out. The spinal cord manages this reflex alone. No higher level of processing in the nervous system is necessary for this reflex to occur.

Argosy Publishing, Inc. Spinal reflexes give us an opportunity to look at the functions of three types of nerve cells, or neurons. Sensory neurons carry information from the external environment or from the body back to the CNS. In the knee-jerk reflex, sensory neurons tell the spinal cord that a muscle has been stretched by the tap of the hammer. Motor neurons carry commands from the CNS back to the muscles and glands of the body. In response to information about the stretched muscle, the spinal cord sends a message through motor neurons back to your leg, telling the muscle to contract to counteract the stretch. You know what happens next—your foot kicks as the muscles contract. Neurons that have neither sensory nor motor functions are called interneurons. Inter in this case means “between,” because many interneurons form bridges between sensory and motor neurons. The knee-jerk reflex forms a simple arc between a sensory neuron and a motor neuron and does not require interneurons. However, interneurons play important roles in other reflexes and throughout the nervous system. Moving up from the spinal cord brings us to the brainstem. Early in prenatal development, the emerging brain forms three bulges. The most forward of these bulges develops into the two large cerebral hemispheres, which we discuss in a later section. The remaining two bulges form the brainstem. If you examine Figure 4.14, you can see that the brainstem looks like the stem of a flower, supporting the larger blossom of the cerebral hemispheres. Directly branching from the brainstem are the cranial nerves, which perform the same functions for the head and neck areas that the spinal nerves manage for the remainder of the body. We will discuss the cranial nerves in more depth in a later section on the PNS. Most sobriety tests assess the function of the cerebellum, which helps us maintain balance and muscle coordination.

Charlie Neuman/ZUMA Press/Corbis Figure 4.14 Structures of the Brainstem. The brainstem contains structures responsible for reflexive behaviours, heart rate, breathing, arousal, sleep, preliminary sensory analysis, balance, and movement.


Argosy Publishing, Inc. The spinal cord merges with our first brainstem structure, the medulla. Like the spinal cord, the medulla contains large bundles of nerve fibres travelling to and from higher levels of the brain. The medulla manages many functions essential to life, such as heart rate, breathing, and blood pressure. Just above the medulla is the pons, which contains structures involved with the management of sleep, arousal, and facial expressions. Pons means “bridge” in Latin. The pons not only serves as a bridge between the higher and lower portions of the brain, but it also connects the cerebellum to the rest of the brain. Essential for maintaining balance and motor coordination, the cerebellum is one of the first structures in the brain to be affected by alcohol. As a result, alcohol consumption impairs balance (walking a straight line) and motor coordination (touching your finger to the tip of your nose with your eyes closed). Most sobriety tests are the same tests a neurologist would use to assess the function of the cerebellum. Surprisingly, the cerebellum contains more nerve cells than the rest of the brain combined. Not only does the cerebellum contain huge numbers of neurons, but it is also richly connected with the rest of the CNS. Because of the cerebellum’s position on the brainstem, which is relatively ancient in terms of evolution compared to the cerebral hemispheres, neuroscientists initially underestimated its importance to human behaviour. They believed that the cerebellum’s activities were restricted to managing the timing and strength of movements. While we still do not know exactly what the cerebellum does, today’s neuroscientists believe that it has a broader role in making mental and motor skills more automatic. Damage to the human cerebellum produces subtle deficits in language, cognition, and perception. In autism spectrum disorder, a condition that affects language, sensory, and social behaviours, abnormalities in the cerebellum are common (Courchesne, 1997; Fatemi et al., 2012). For example, one study comparing male children with and without autism found that the right cerebellum of the males with autism was less structurally complex (had a flatter surface area) than that of the males without autism (Zhao, Walsh, Long, Gui, & Denisova, 2018). Different regions of the cerebellum also appear to play a role in the psychopathology of PTSD (Rabellino, Densmore, Theberge, McKinnon, & Lanius, 2018). Opiate painkillers such as morphine and OxyContin produce some of their analgesic effects by interacting with opioid receptors in the periaqueductal grey. The midbrain sits above the pons and contains a number of structures involved in sensory reflexes, movement, and pain. For example, the periaqueductal grey of the midbrain plays an important role in the body’s management of pain because it contains receptors for endorphins, our natural opioids, discussed earlier in this chapter and in Chapter 6. When endorphins are present in the periaqueductal grey, they reduce the perception of pain by decreasing the strength of pain messages travelling to higher levels of the brain. Nearby are other cell clusters that serve as the major sources of two important chemical messengers in the brain, serotonin and norepinephrine. These structures participate in states of arousal, mood, appetite, and aggression.
Running the length of the brainstem’s core from the upper medulla into the midbrain is the reticular formation, which participates in the management of levels of arousal, discussed further in Chapter 6. The cells in the reticular formation have two settings—fast and slow. When the cells are firing quickly, we usually show other signs of being awake. When the cells are firing slowly (or are damaged due to a stroke or other injury), an individual will enter either deep sleep or unconsciousness.

4-4b

Subcortical Structures Embedded within the vast tracts of nerve fibres or white matter that make up the bulk of the cerebral hemispheres are a number of subcortical structures that participate in self-awareness, learning, emotion, movement, communication, the inhibition of impulses, and the regulation of body states. We call them subcortical because they lie sub, which means “below,” the cerebral cortex, which comprises the wrinkled outermost covering of the cerebral hemispheres. Early anatomists collected some of these subcortical structures into a limbic system (limbic means “border,” and these structures form a gentle curve below the cerebral cortex), but this term is losing popularity with contemporary anatomists (Rolls, 2015). You might also have heard the limbic system called “your emotional brain.” As you will see in the next sections, some of these structures do participate in our emotional life, but they perform many other functions as well. We usually discuss subcortical structures in the singular, as in thalamus or hippocampus, but they actually are paired sets of structures, one on either side of the brain. The Thalamus Almost at the centre of the brain lies the thalamus. The thalamus is often called the gateway to the cortex, because input from most of our sensory systems (vision, hearing, touch, and taste) travels first to the thalamus, which then forwards the information to the cerebral cortex. The cortex, in turn, forms large numbers of connections with the thalamus. As described earlier in this chapter, Patient George developed synesthesia, a condition in which sensory experiences become intermingled (discussed in Chapter 5), after experiencing a thalamic stroke. In addition to its role in sensation, the thalamus is involved with memory and states of consciousness. Lesions in the thalamus are associated with profound memory loss (Cipolotti et al., 2008). As you will learn in Chapter 6, during our deepest stages of sleep, the thalamus coordinates the activity of cortical neurons, “tuning out” the outside world, making it difficult to awaken. Disturbances in the circuits linking the thalamus and the cortex accompany some seizures. The Basal Ganglia The basal ganglia consist of several large structures involved with voluntary movement that curve around to hug the thalamus (see Figure 4.14 and Figure 4.15). Some of the structures included in the basal ganglia are the caudate, putamen, globus pallidus, and nucleus accumbens. The basal ganglia form complex circuits with motor structures located in the brainstem, the thalamus, and the cerebral cortex. Degeneration of the basal ganglia occurs in Parkinson’s disease, a condition that makes the initiation of voluntary movement extremely difficult. The basal ganglia also contribute to several psychological disorders described in Chapter 14, including obsessive-compulsive disorder (OCD) and attention deficit hyperactivity disorder (ADHD). These disorders are characterized by inadequate control of voluntary movement. In the case of OCD, patients may endlessly repeat a behaviour, such as hand-washing, while in ADHD, voluntary movements can be unusually frequent, rapid, and impulsive. Figure 4.15 The Thalamus and the Basal Ganglia. Near the centre of the brain, the thalamus receives input from most of our sensory systems and relays the information to the cerebral cortex. Curving around the thalamus are the basal ganglia, which form an important part of our voluntary movement systems.

© Argosy Publishing, Inc. While the main function of the basal ganglia is to facilitate movement and inhibit competing movements (allowing you to, say, pick up a fragile glass ornament with enough force to hold it, but not so much force as to crush it), each of the various structures also plays important roles in other brain functions. For example, the nucleus accumbens, a small structure located between the caudate and putamen, plays an important role in the brain’s reward and pleasure circuitry. Whether you are eating, having sex, using addictive drugs, gambling, or simply enjoying a beautiful sunset, this circuit comes into play (Comings & Blum, 2000). The activity of the nucleus accumbens is related to a person’s sense of social inclusion. When people who have strong connections to friends and family view a happy social scene, their nucleus accumbens becomes active. In contrast, when people with weaker social connections view the same happy scenes, their nucleus accumbens shows less activity than those of the socially connected people (Cacioppo, Norris, Decety, Monteleone, & Nusbaum, 2009). The Hypothalamus The hypothalamus is involved with motivation and homeostasis (see Chapter 7), or the regulation of body functions such as temperature, thirst, hunger, biological rhythms, and sexual activities (see Figure 4.16). The hypothalamus is often described as contributing to the “4F” behaviours: feeding, fleeing, fighting, and, well, sex (fornication). The hypothalamus carries out its motivational and homeostatic tasks by directing the autonomic nervous system and the endocrine system and its hormones, which we discuss in detail later in this chapter. Figure 4.16 Other Important Subcortical Structures. Subcortical structures located under the cerebral cortex participate in attention, decision making, learning, memory, and emotion.

Argosy Publishing, Inc. The Hippocampus The hippocampus, named for its shape after the Greek word for seahorse, is essential to the formation of long-term memories, which we discuss in more detail in Chapter 9. Memories are not stored permanently in the hippocampus, but it is likely that the hippocampus is involved in the storage and retrieval of memories located elsewhere in the brain. Damage to the hippocampus results in profound impairments in the ability to form new memories, but intelligence, personality, and most memories of events that occurred before hippocampal damage remain intact. The Cingulate Cortex The cingulate cortex forms a fold of tissue on the inner surface of each cerebral hemisphere. The forward two thirds of this structure, known as the anterior cingulate cortex (ACC), participate, along with the hypothalamus, in the control of the autonomic nervous system, which we discuss later in this chapter. The ACC also plays significant roles in decision making, emotion, anticipation of reward, and empathy. The rear third, or posterior cingulate cortex (PCC), participates in memory and visual processing. The Amygdala The amygdala gets its name from the Greek word for “almond” because of its shape. One amygdala is deeply embedded in the temporal lobe, the wing of cortex that curves around the side of the brain, in each hemisphere. The amygdala receives sensory information (vision, hearing, and smell) and produces emotional and motivational output that is sent to the cerebral cortex. Although the amygdala responds to both positive and negative stimuli, it is best known for its role in identifying, remembering, and responding to fear and aggression. Research studies have found that the amygdala becomes more active when people are looking at pictures of fearful facial expressions. The more intense the expression of fear, the more activation is observed in the amygdala (Vuilleumier, Armony, Driver, & Dolan, 2001). Monkeys with damaged amygdalae approached unfamiliar monkeys boldly and fearlessly, which is uncharacteristic of these animals (Emery et al., 2001). They also failed to show their species’ typical fear of rubber snakes and unfamiliar humans (Mason, Capitanio, Machado, Mendoza, & Amaral, 2006). Researchers at the University of Montreal have found that ten-year-old children raised by mothers with long-term symptoms of depression have enlarged amygdalae, suggesting that the brain is highly responsive to early environmental experiences, although the specific causes of the enlarged amygdalae are unknown (Lupien et al., 2011). Several behavioural deficits have been identified in a patient, known as Patient S.M., who experienced a rare medical condition that damaged her amygdalae in both hemispheres (Adolphs, Tranel, & Damasio, 1998) (see Figure 4.17). Although the patient can identify facial expressions of happiness, sadness, and disgust in photographs, she has a specific difficulty identifying expressions of fear. When researchers exposed Patient S.M. to snakes, spiders, and scary movies, she showed no signs of fear (Feinstein, Adolphs, Damasio, & Tranel, 2011). Although Patient S.M. shows no signs of antisocial behaviour, other research indicates that people who harm others without feeling guilt are also impaired in their abilities to perceive fear in facial expressions or voices (Blair et al., 2002; Marsh & Blair, 2008). The condition affecting Patient S.M. 
does not usually begin until after the age of 10, so having functional amygdalae in childhood might have helped Patient S.M. learn to act in prosocial ways. This image is taken from a man who is blind because of damage caused by a stroke to his visual areas, indicated by the red and green arrows. The orange area indicates activation of his amygdala when he is shown a photo of an angry face. This man couldn’t tell you whether he is looking at a tree, a building, or a face, let alone a happy or angry one, but his amygdala knows and reacts appropriately.

From A. J. Pegna, et al., “Discriminating Emotional Faces Without Primary Visual Cortices Involves the Right Amygdala,” Nature Neuroscience, Jan. 2005, 8(1): 24–25. © 2005 Nature Publishing Group. Figure 4.17 Results of Damage to the Amygdala. In addition to her difficulties in responding to fear stimuli, Patient S.M. shows other behavioural deficits related to her damaged amygdalae. Personal space, although variable across cultures, is consistent within cultures. Healthy control participants in the United States stand about 0.64 metre (about 2 feet) or arm’s length away from people they don’t know. In contrast, Patient S.M. prefers to stand nearly twice as close to others, or 0.34 metre (about 1 foot). Standing closer to strangers than is normal for your culture is likely to send inappropriate social messages of either threat or intimacy.

Source: D. P. Kennedy, J. Gläscher, J. M. Tyszka, & R. Adolphs (2009). “Personal Space Regulation by the Human Amygdala,” Nature Neuroscience, 12, 1226–1227. doi: 10.1038/nn.2381.

4-4c The Cerebral Cortex Above the brainstem, we find the two large cerebral hemispheres, which are connected by a large bundle of nerve fibres known as the corpus callosum. The thin layer of cells covering the outer surface of the cerebral hemispheres is the cerebral cortex (see Figure 4.18). The cortex, which means “bark” in Latin, covers the cerebral hemispheres like the bark of a tree. Most of the remaining bulk of the hemispheres is made up of white matter, or nerve fibre pathways, which connect the cortex with other parts of the nervous system. The average 20-year-old human brain has around 162 500 kilometres (100 000 miles) of white matter (Marner, Nyengaard, Tang, & Pakkenberg, 2003). The subcortical structures discussed earlier are distributed within this white matter. Figure 4.18 The Cerebral Cortex. The cerebral cortex (cortex means “bark”) is a thin layer of cells on the outer surface of the brain. The closeup shows different views of the cortex, including the distribution of complete single cells, cell bodies, and myelin, the insulating material that covers most nerve fibres.


Argosy Publishing, Inc. If stretched out flat, the human cerebral cortex would cover an area of about 0.23 square metre (about 2.5 square feet). To fit within the confines of the skull, the cortex is convoluted or wrinkled. The degree of cortical convolution positively correlates with the general intellectual capacities of a species. For instance, human brains are more convoluted than sheep brains, which in turn are more convoluted than rat brains (see Figure 4.19). Figure 4.19 Degree of the Convolution of the Cortex Predicts Intellect. As species’ behaviour becomes more complex, we see a corresponding increase in the degree of convolution (wrinkling) of the cerebral cortex. This wrinkling of the brain permits more brain tissue to fit within the skull. As a result, cortical size has increased more quickly over the course of evolution than skull size—an important adaptation given that large skulls are difficult to get through the birth canal.


© Cengage Learning Each hemisphere of the cerebral cortex may be divided into four lobes, named after the bones of the skull that cover them (see Figure 4.20). Toward the front of the brain, we find the frontal lobe, and directly behind the frontal lobe lies the parietal lobe. At the back of the brain is the occipital lobe. Curving around the side of each hemisphere, we find the temporal lobe. Because we have two hemispheres, it follows that we have pairs of each type of lobe, usually denoted right or left (e.g., right frontal lobe and left frontal lobe). Figure 4.20 Lobes of the Cerebral Cortex. The cerebral cortex is traditionally divided into four lobes: frontal, parietal, occipital, and temporal.


Argosy Publishing, Inc. Localization of Functions in the Cerebral Cortex As we mentioned earlier in this chapter, the phrenologists were wrong in assuming that behavioural characteristics were reflected in bumps of the skull, but they were correct in suggesting that some functions were localized in particular parts of the brain. The functions performed by different areas of the cerebral cortex within the lobes fall into three categories: sensory, motor, and association. The sensory cortex processes incoming information from the sensory systems, such as vision or hearing, which we describe in Chapter 5. The primary visual cortex is in the occipital lobe, and the primary auditory cortex is in the temporal lobe. The primary somatosensory cortex (soma refers to “body”) is in the parietal lobe and processes information about touch, pain, body position, and skin temperature. The primary motor cortex is in the rearmost portion of the frontal lobe and provides the highest level of voluntary control over movement. Areas of the cortex that do not have specific sensory or motor functions are known as association cortex. Association means “connection” in this case, and association cortex helps us form bridges between sensation and action, language, and abstract thought. Association areas are distributed throughout the cortex. Diffusion tensor imaging highlights the rich arrays formed by the white matter in the brain.

Simon Fraser/Science Source The Frontal Lobe In addition to being the home of the primary motor cortex, the frontal lobe has a number of important, sophisticated cognitive functions. Adjacent to the primary motor cortex is Broca’s area, named after Paul Broca, who helped identify its functions in the 1860s. Broca’s area participates in the production of speech. Consequently, damage to Broca’s area caused by a stroke or a tumour produces considerable difficulty in speaking, although comprehension of speech remains good. The most forward portion of each frontal lobe, known as the prefrontal cortex (pre means “before”), is involved with the planning of behaviour, attention, and judgment. Executive functions refer to the range of cognitive processes that enable self-regulation and the cognitive control of behaviour, including attentional control, inhibition, planning, and self-monitoring. While these executive functions likely involve many regions of the brain, including many non-frontal regions such as the cerebellum, the frontal lobes play a particularly important role in these tasks. Abnormalities of frontal lobe activity may account for characteristics of some psychological disorders, including schizophrenia and ADHD, which we will discuss in detail in Chapter 14. The role of the frontal lobes in the planning of behaviour is illustrated by a bizarre condition known as alien hand syndrome, which occurs when connections between the prefrontal cortex and the lower areas of the brain involved in movement are damaged (Kikkert, Ribbers, & Koudstaal, 2006). This condition has no effect on sensory feedback from the limb, such as touch and position. However, patients with this condition do not seem to have control over their affected limbs and often remain unaware of the limbs’ activities until they are pointed out by another person. For instance, a hand might undo a button or remove clothing without the patient’s awareness of the activity. Patients do not recognize the rogue limb as their own and may use the other hand to wrestle with it in an attempt to control it forcibly or punish it for its activities. The importance of the frontal lobes is also illustrated by the results of a terrible accident that befell a young railroad worker named Phineas Gage in 1848. While Gage was preparing to blast through some granite, a freak accident sent an iron tamping rod through his head before it landed about 9 metres (about 30 feet) away. The rod entered his head under his left cheekbone, passed behind his left eye, and exited out the middle of his forehead. Remarkably, Gage survived, but he was not the same person as before his accident. Although outwardly normal in his intelligence, speech, and movement, Gage became prone to angry outbursts and unreliability, which made it difficult for him to find and keep employment. As his doctor noted, “His contractors, who regarded him as the most efficient and capable foreman in their employ previous to his injury, considered the change in his mind so marked that they could not give him his place again” (Fleischman, 2002, p. 20). However, it is also important to point out that in the century and a half since Gage’s accident, his story has been prone to exaggeration and distorted claims. For example, some writers have claimed that Gage mistreated his wife and children, but Gage was unmarried and childless (Macmillan, 2000). 
Gage’s story is an important reminder that even (or perhaps especially) in cases like these, we need to use our critical thinking skills to carefully examine the claims being made and the evidence used to support them.

ImageShop/Cardinal/Corbis Gage suffered his accident when modern imaging technologies were nowhere to be seen on the scientific horizon. Today, we know more about why Gage behaved the way he did because of his accident. Scientists initially recreated the pathway that Gage’s tamping rod must have travelled (Damasio, Grabowski, Frank, Galaburda, & Damasio, 1994). More recent analyses show that the rod did more damage to the fibre pathways in Gage’s brain than to the cells of his cerebral cortex (Van Horn et al., 2012). The rod disrupted 11 percent of the connections in Gage’s brain, compared to 4 percent of the cells in his cortex. Phineas Gage suffered a terrible injury to his frontal lobes. Modern imaging techniques allowed scientists to recreate the pathway of Gage’s tamping iron through his brain. Remarkably, Gage survived, although his friends described him as a “changed man.”

Patrick Landmann/Science Source The rod missed areas involved in speech, voluntary movement, and the senses of touch, body position, and pain. Thus, Gage had no difficulty speaking or performing complex movements, such as driving a stagecoach, following his accident. Modern patients with damage in this area exhibit many of the same social deficits that Gage demonstrated (Damasio & Anderson, 1993; Damasio, Tranel, & Damasio, 1991; Eslinger & Damasio, 1985). They are impulsive, emotionally unstable, unpredictable, and unable to make reasonable decisions. However, there is also reason to believe that recovery from these deficits is possible. Documentation from individuals who knew Gage in Chile, where he worked as a stagecoach driver, provides evidence for what is known as the social recovery process, whereby someone like Gage may be able to recover these lost social skills by being placed in a structured environment (such as the one provided by stagecoach employment) where such relearning is possible (Macmillan, 2000). Knowing the outcome of the Gage case, you might be astonished to learn that physicians in the 1940s and 1950s deliberately damaged the frontal lobes of nearly 50 000 American patients in a procedure known as a frontal lobotomy. The intent of the procedure was to reduce fear and anxiety in patients with serious psychological disorders, which it accomplished in many cases but at great expense to the patient. As you might suspect from reading about Gage, many patients who underwent the procedure were unable to work or live normal lives because of their impulsive, antisocial behaviour. The orbitofrontal cortex, a part of the prefrontal cortex located just behind the bony orbits protecting the eyes, plays an important role in our emotional lives (see Figure 4.21). People with damage to the orbitofrontal cortex demonstrate dramatic deficits in their social behaviour and experience of emotion, despite retaining their intelligence, language skills, and abilities to pay attention, learn, and remember (Bechara, Damasio, & Damasio, 2000; Damasio, 1994; Damasio & Anderson, 1993). Figure 4.21 The Orbitofrontal Cortex. People with damage to their orbitofrontal cortex have difficulty controlling impulses and anticipating the negative outcomes of poor decisions. This part of the brain is one of the last areas to mature.

Argosy Publishing, Inc. Patient E.V.R. experienced orbitofrontal damage during surgery for a tumour (Eslinger & Damasio, 1985). Before his surgery, E.V.R. was considered a role model and a respected member of his community. Following his surgery, E.V.R. lost his job, went bankrupt, and divorced his wife to marry a prostitute, whom he divorced two years later. Although he had no difficulties talking about moral dilemmas, he experienced enormous problems when trying to make everyday decisions, such as buying toothpaste or choosing a restaurant. Researchers have made other connections between abnormalities in the orbitofrontal cortex and antisocial behaviour. In a sample of 21 individuals diagnosed with antisocial personality disorder, a condition characterized by disregard for others that we discuss in Chapter 14, the volume of the prefrontal cortex, which includes the orbitofrontal cortex, was about 11 percent less than in control participants who did not have the condition (Raine, Lencz, Bihrle, LaCasse, & Colletti, 2000). Individuals with antisocial personality disorder or orbitofrontal damage not only fail to anticipate the emotional consequences of situations but also are unable to delay gratification. They typically choose immediate rewards over long-term benefits, such as stealing something now despite knowing the long-term benefits of staying out of jail. The Occipital Lobe Early scientists discovered that nerves carry only one type of information. When you take a blow to the back of the head, where your primary visual cortex is found, your occipital lobe does not know how to say, “ouch.” Instead, it responds to the blow as if you saw a flash of light (not necessarily the tweeting birds or stars indicated in cartoons). The occipital lobe, located at the back of the brain, is home to the primary visual cortex. The primary visual cortex begins the process of interpreting input from the eyes by responding to basic information about an image, such as its borders, shading, colour, and movement. This amount of processing by itself does not allow you to read this page or recognize your professor in the library. Two important pathways link the occipital lobe with the rest of the brain. A pathway connecting the occipital lobe with the temporal lobe allows you to recognize objects you see. A second pathway connects the occipital lobe with the parietal lobe and allows you to process the movement of objects. We discuss these processes further in the next sections. The Temporal Lobe The temporal lobe has several areas that are specialized for particular functions. The temporal lobe is home to the primary auditory cortex, which allows us to process incoming sounds. As mentioned earlier, the temporal lobe processes some higher visual system tasks, including the recognition of objects and the faces of familiar people. Patients with damage to the temporal lobe are often unable to recognize their loved ones by sight. They must wait until the person speaks. We discuss this processing of vision and hearing by the temporal lobe in more detail in Chapter 5. We saw earlier how damage to Broca’s area in the frontal lobe produced difficulty in speaking. Damage to another language area located in the temporal lobe, Wernicke’s area, produces different results (see Figure 4.20). As we discuss in Chapter 10, patients with damage to their Wernicke’s area speak fluently but make no sense. They cannot comprehend speech, but they seem blissfully unaware of their deficits. 
The Parietal Lobe The parietal lobe is home to the primary somatosensory cortex, which helps us localize touch, pain, skin temperature, and body position. Damage to the parietal lobe can produce the odd symptoms of neglect syndrome. Patients with this condition have difficulty perceiving part of their body or part of the visual field (see Figure 4.22). Figure 4.22 Damage to the Parietal Lobe Causes Neglect. Patients with certain types of brain damage in the parietal lobe experience “hemispatial neglect,” or the inability to pay attention to stimuli located in space on the opposite side relative to their damage. They seem unaware that there is anything wrong with their perceptions. The patient whose drawings are featured in this figure experienced damage to the right hemisphere, resulting in neglect for anything to the left in space. Neglect does not affect vision alone but can also affect the sense of body location. One patient was unable to recognize his own leg and suspected the hospital staff of putting a cadaver leg into his bed.

© Cengage Learning The parietal lobe processes input about taste and, like the temporal lobe, it engages in some complex processing of vision. Whereas the temporal lobe participates in visual recognition, the parietal lobe tells us how quickly something is moving toward us. This can be an essential bit of information when deciding whether it is safe to make a left turn in front of oncoming traffic. We discuss these functions of the parietal lobe in depth in Chapter 5. Mirror Neurons In the early 1990s, Giacomo Rizzolatti and a team of Italian scientists were busy studying the brain correlates of movement when they noticed something odd (Di Pellegrino, Fadiga, Fogassi, Gallese, & Rizzolatti, 1992). They had observed that certain neurons in a part of a monkey’s brain became especially active when the monkey performed certain actions, like reaching for a piece of banana or a peanut. When an experimenter picked up a piece of food to place it within the monkey’s reach, some of the same neurons began to fire. Suspecting something important was behind these observations, the researchers began to study these “mirror neurons” more carefully. The scientists believed that mirror neurons provided a mechanism for understanding the actions of others (Caggiano et al., 2011).

Psychology Takes on Real-World Problems The Problem of Procrastination If you are like most people, you probably sometimes procrastinate—putting off an important task until the last minute. However, some people are more likely to procrastinate than others, and chronic procrastination has been linked to a wide range of negative outcomes, including lower grades, higher stress, and increased illness among university students (e.g., Tice & Baumeister, 1997). Although some have argued that procrastination is not really that harmful, as long as the tasks eventually get done (and many people who procrastinate will claim that they “work better under pressure”), decades of psychological research fail to support such claims, and instead point to the conclusion that procrastination is ultimately a self-destructive pattern of behaviour (e.g., Ferrari & Tice, 2000). Given this finding, researchers have sought to understand why certain people seem more prone to procrastination than others, and to develop interventions that can help mitigate procrastination and its negative consequences. Timothy Pychyl and his Procrastination Research Group at Carleton University in Ottawa have spent decades studying procrastination, which they describe as an “amygdala hijack.” According to Pychyl and others, procrastination is not a problem with time management, as is often believed, but rather stems from difficulties with emotional control and regulation. As described previously, the amygdala plays an important role in emotional processing and assessing the potentially threatening nature of stimuli. However, the amygdala is not the only region of the brain thought to play a role in procrastination. As we have learned, areas of the brain such as the anterior cingulate cortex (ACC) and prefrontal cortex are involved in the execution of goal-directed behaviours, so it is no surprise that procrastination may involve these regions of the brain as well. Indeed, research has found that academic procrastination correlates with numerous self-reported measures of executive function, including impulsivity, task initiation, self-monitoring, and planning and organization (Rabin, Fogel, & Nutter-Upham, 2011). 
To put it simply, whenever we face a situation where we can either choose to initiate a challenging and potentially frustrating task (e.g., getting started on a ten-page essay) or to delay (e.g., spending an hour on social media), our emotional brain goes to battle with our more rational prefrontal cortex. Chronic procrastinators engage in procrastination not only to feel better in the moment, but also because they falsely believe that they will be better emotionally equipped to deal with the challenging task in the future (Sirois & Pychyl, 2013). One brain imaging study that compared the brains of procrastinators and non-procrastinators found that the procrastinators had larger amygdalae, as well as reduced functional connectivity between the amygdala and the dorsal anterior cingulate cortex (dACC; Schlüter et al., 2018). The dACC is connected with the prefrontal cortex and is known to be involved in various aspects of self-control, as well as in the top-down regulation of the amygdala.

Universal Images Group/Getty Images It is important to remember that biology is not destiny, and that the identification of brain differences in chronic procrastinators does not mean that there is nothing they can do to prevent procrastination. Our brains are plastic and capable of change. One promising way for procrastinators to practise emotion regulation is through the learning of mindfulness meditation. A number of studies have shown that even short-term mindfulness meditation practice can lead to decreased amygdala activity and improved connectivity between the amygdala and prefrontal regions of the brain (Gotink, Meijboom, Vernooij, Smits, & Hunink, 2016). Other evidence-based strategies that can help reduce the likelihood of procrastination include breaking down larger tasks (like that ten-page essay) into smaller, more manageable, and less aversive assignments (Rabin et al., 2011) and setting meaningful self-imposed deadlines for tasks, especially when external deadlines are not present (Ariely & Wertenbroch, 2002). For example, if you have a ten-page paper due at the end of the term, it may have only a single final deadline. To help avoid procrastination, you might break the paper up into smaller sections, and then assign yourself a deadline for each section (ensuring that you will incur some sort of penalty for not meeting each deadline). Importantly, people need to forgive themselves when procrastination happens. Research has shown that when students are able to forgive themselves for procrastinating on an important task (studying for their first introductory psychology midterm), they engage in less procrastination on a future task (studying for their second psychology midterm). So, don’t beat yourself up if you fall off the procrastination wagon! Lingering negative feelings, such as disappointment with the outcome of a test, do nothing to help prepare for future assignments. Self-forgiveness reduces the likelihood of future procrastination by reducing negative affect (Wohl, Pychyl, & Bennett, 2010). For more information on procrastination, you may want to check out Tim Pychyl’s podcast iProcrastinate or his Procrastination Research Group website: https://www.procrastination.ca.

Do human beings, as well as monkeys, possess mirror neurons? Research on humans typically involves brain imaging, such as functional magnetic resonance imaging (fMRI), so we see the activity of larger areas of the brain (mirror “systems”) rather than single neurons. It does appear, though, that human beings possess mirror systems that help us understand not just the actions and emotions of others, but their intentions as well (Iacoboni et al., 2005). Some researchers have suggested a connection between the mirror neuron system (MNS) and the deficits in social decision making exhibited by individuals with autism spectrum disorder (Khalil et al., 2018). Initially, researchers looked for mirror systems in the vicinity of the motor cortex in the frontal lobe, but other areas of the brain seem to have similar mirror capacities, allowing us to relate to feelings of disgust and of being touched expressed by others (Keysers et al., 2004; Wicker et al., 2003).

Right Brain and Left Brain A special type of localization of function in the cerebral cortex is known as lateralization, or the localization of a function in either the right or the left cerebral hemisphere. A basic type of lateralization occurs in the somatosensory and voluntary motor systems in the brain. 
Movement and sensation on the right side of the body are processed by the left hemisphere, and movement and sensation on the left side of the body by the right hemisphere. If you observe a person who is paralyzed on the right side of the body, you can be fairly certain that this paralysis is a result of motor cortex damage in the left hemisphere. In a similar manner, the visual cortex of the left hemisphere processes all data from the right half of the visual field, while the right hemisphere processes all data from the left half of the visual field. In other words, when you look straight ahead, holding your eyes and head still, everything to the right or left of centre is processed by the opposite hemisphere. Much of our knowledge of lateralization of the human brain resulted from the careful analysis of a surgical procedure known as a split-brain operation (see Figure 4.23). To treat rare cases of life-threatening seizures, surgeons in the 1960s cut the patients’ corpus callosum and other pathways connecting the right and left cerebral hemispheres (Bogen, Schultz, & Vogel, 1988). The procedure not only succeeded in reducing or eliminating seizures, but it also produced no changes in personality, intelligence, or speech. Only when patients were studied in the laboratory were the consequences of their surgery evident. A typical experiment demonstrating the differences in processing by the right and left hemispheres (Gazzaniga, 1967) is illustrated in Figure 4.24. Figure 4.23 The Split-Brain Operation. To save patients from life-threatening seizures, physicians cut the corpus callosum, a large band of nerve fibres connecting the right and left hemispheres.

Argosy Publishing, Inc. Figure 4.24 The Hemispheres Have Different Capacities for Language Functions. If participants fixate on a dot in the middle of the screen, information to the left of the dot is processed by the right hemisphere, and information to the right of the dot is processed by the left hemisphere. When asked verbally what word was seen, participants answered “art,” which was seen by the verbal left hemisphere. When asked to point with the left hand, which is controlled by the right hemisphere, to the word that was seen, the participants pointed to “he,” which is the word seen by the right hemisphere.
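To make the logic of this classic task concrete, here is a minimal sketch in Python (our own illustration; the function names are hypothetical and not from the original study). It models how a split-brain participant’s two response channels can disagree about the same display:

    # Toy model of the split-brain task in Figure 4.24. Assumptions for this
    # sketch: language is lateralized to the left hemisphere, and the severed
    # corpus callosum means each hemisphere knows only its half of the display.
    def present(left_of_fixation, right_of_fixation):
        # Each visual half-field projects to the opposite hemisphere.
        return {"right_hemisphere": left_of_fixation,
                "left_hemisphere": right_of_fixation}

    def verbal_report(hemispheres):
        # Speech is produced by the verbal left hemisphere.
        return hemispheres["left_hemisphere"]

    def left_hand_points_to(hemispheres):
        # The left hand is controlled by the right hemisphere.
        return hemispheres["right_hemisphere"]

    seen = present(left_of_fixation="he", right_of_fixation="art")
    print(verbal_report(seen))        # prints "art"
    print(left_hand_points_to(seen))  # prints "he"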


Argosy Publishing, Inc. Subsequent research indicated that language, for most people, is lateralized to the left hemisphere, although a minority of individuals process language either in the right hemisphere or in both hemispheres (Rasmussen & Milner, 1977). The lateralization of language is correlated, although not completely, with a person’s handedness. As shown later in the chapter, nearly all right-handed people lateralize language to the left hemisphere, as do about 70 percent of people who are left-handed. The remaining individuals process language either in the right hemisphere or in both hemispheres. Language is not the only cognitive process to show evidence of lateralization. Other suspected lateralized processes include mathematical computation and logical reasoning (left hemisphere) and some music functions, spatial information, intuition, and the visual arts (right hemisphere). Emotional behaviour also appears to be lateralized. In most people, activity of the left hemisphere is correlated with positive emotions, whereas activity in the right hemisphere is correlated with negative emotions, providing the cortex with a rough distinction between approaching positive things (left hemisphere activity) and avoiding negative things (right hemisphere activity) (Davidson & Irwin, 1999). Anesthetizing the left hemisphere results in temporary feelings of depression, while anesthetizing the right hemisphere produces happiness (Lee et al., 2004). Right–Left Brain Myths A word of caution is in order here. As noted by Roger Sperry, who won the 1981 Nobel Prize for his investigations of lateralization in the human brain, “The left–right dichotomy in cognitive mode is an idea with which it is very easy to run wild” (Sperry, 1982, p. 1225). Most of us have not undergone a split-brain procedure. Our intact corpus callosum and other connections between the two cerebral hemispheres allow information to pass rapidly from one hemisphere to the other. Suggestions that you can improve your artistic or athletic talent or reduce inattention by “learning to access your right brain” have gained considerable attention in the popular press, but they do not hold up to scrutiny in the laboratory. One of the most popular myths about lateralization is the idea that individual differences in artistic talent or logical thinking correlate with a person’s dominant hemisphere. Hemisphere dominance, as measured by the relative size of the hemispheres and the localization of language and handedness, does not predict occupational choice or artistic talent (Springer & Deutsch, 1998). For most people, activity in the left hemisphere is correlated with positive emotions while activity in the right hemisphere is correlated with more negative emotions.


Yuri Arcurs/ Shutterstock.com; The Science Picture Company/Alamy Stock Photo The Function of Lateralization What are the advantages of lateralization? Most species of animals show a preference for one hand or the other, such as when a cat reaches for its prey (Cole, 1955; Holder, 1999). Lateralization might provide organisms with the ability to multitask (Rogers, 2000). Chicks raised in the dark fail to lateralize visually guided responses normally and are at a disadvantage compared to normal chicks when feeding and watching for predators simultaneously. Success in this type of multitasking has obvious survival advantages. Research conducted with chicks and other species also supports the notion that lateralization is not just good for individuals, but that shared patterns of asymmetries within populations may help with social cohesion (Halpern, Güntürkün, Hopkins, & Rogers, 2005).

Experiencing Psychology Handedness As shown in Table 4.3, lateralization of language is correlated with handedness (Milner, 1974). Handedness represents a continuum, with some people being nearly ambidextrous and others having strong preferences for using one hand or the other. Although most of us would have no trouble answering a question asking whether we are right- or left-handed, researchers like Milner must apply systems for determining a person’s handedness. One of the frequently used instruments follows.

Table 4.3 Relationships Between Handedness and Language Localization

Handedness            Language left   Language right   Mixed dominance
Right-handed (90%)    96%             4%               0%
Left-handed (10%)     70%             15%              15%
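As a worked example of reading Table 4.3, you can estimate the overall share of people with left-hemisphere language by weighting each row by how common that handedness is. A minimal sketch in Python (our own back-of-the-envelope calculation using the table’s figures):

    # Handedness prevalence and language-localization rates from Table 4.3.
    p_right_handed, p_left_handed = 0.90, 0.10
    p_lang_left_given_right_handed = 0.96
    p_lang_left_given_left_handed = 0.70

    # Law of total probability: weight each rate by handedness prevalence.
    p_language_left = (p_right_handed * p_lang_left_given_right_handed
                       + p_left_handed * p_lang_left_given_left_handed)
    print(round(p_language_left, 3))  # prints 0.934, i.e., roughly 93% of people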

Simply read each of the questions in Table 4.4. Decide which hand you use for each activity, and then circle the answer that describes you the best. If you are unsure of any answer, try to act out the action. To find your score, count the number of circled “right” answers and subtract the number of circled “left” answers. Ignore “either” answers. Ambidextrous people will score around 0, very right-handed people will score near +12, and very left-handed people will score near –12. (A worked scoring example in Python appears below.)

Table 4.4 The Lateral Preference Inventory

1. With which hand do you draw? Left / Right / Either
2. Which hand would you use to throw a ball to hit a target? Left / Right / Either
3. In which hand would you use an eraser on paper? Left / Right / Either
4. Which hand removes the top card when you are dealing from a deck? Left / Right / Either
5. With which hand do you normally write? Left / Right / Either
6. In which hand do you use your racquet for tennis, squash, etc.? Left / Right / Either
7. With which hand do you use your toothbrush? Left / Right / Either
8. Which hand holds a knife when you are cutting things? Left / Right / Either
9. Which hand holds the hammer when you are driving a nail? Left / Right / Either
10. In which hand would you hold a match to strike it? Left / Right / Either
11. Which hand holds the thread when you are threading a needle? Left / Right / Either
12. In which hand would you use a fly swatter? Left / Right / Either


© Cengage Learning Source: Adapted from “The Lateral Preference Inventory for Measurement of Handedness, Footedness, Eyedness, and Earedness: Norms for Young Adults,” by S. Coren, 1993, Bulletin of the Psychonomic Society, 31(1), pp. 1–3. Although this test can’t tell you which hemisphere you use for language, your odds of using your left hemisphere for language if you’re very right-handed are quite high. Most figure skaters are right-handed and prefer to spin and jump in a counter-clockwise direction (as depicted in the image on the left). However, a small minority of skaters, many of whom are left-handed, prefer to spin and jump in a clockwise direction (such as Canadian skater Kaetlyn Osmond). While spinning and jumping in either direction is acceptable, it can lead to difficulties when skaters are on the ice warming up together—the left-handed skaters will be skating in the opposite direction of everyone else!
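Returning to the scoring rule given before Table 4.4, the arithmetic is simple enough to express in a few lines. Here is a minimal sketch in Python (the respondent’s answers are made up for illustration):

    # Score the Lateral Preference Inventory: +1 per "right" answer,
    # -1 per "left" answer, 0 per "either" answer, summed over the 12 items.
    def lpi_score(answers):
        return sum({"right": 1, "left": -1, "either": 0}[a] for a in answers)

    # Hypothetical respondent who is strongly, but not completely, right-handed.
    answers = ["right"] * 10 + ["either", "left"]
    print(lpi_score(answers))  # prints 9, near the very-right-handed end (+12)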

dpa picture alliance/Alamy Stock Photo; ITAR-TASS News Agency/Alamy Stock Photo Human lateralization of brain structures might have made language possible (Berlim, Mattevi, Belmonte-de-Abreu, & Crow, 2003). This development could have a big price tag, however, because lateralization might account for our species’ vulnerability to schizophrenia, as discussed in Chapter 14. People with schizophrenia show abnormal hemisphere lateralization and are more likely to be left-handed or to have ambiguous handedness (Berlim et al., 2003). Humans are not the only animals to have a preferred hand. However, other primates, such as this chimpanzee using a stone to open nuts, are equally likely to be right- or left-handed. In contrast, more than 90 percent of humans are right-handed.

Martin Harvey/Alamy Stock Photo Summary 4.2 Structures of the Central Nervous System

Structure       What to remember
Spinal cord     Continuous with brainstem; large white matter pathways; reflexes

4-5 The Peripheral Nervous System (PNS) and the Endocrine System The brain and spinal cord are spectacular processing units, but without input or the ability to implement commands, they would be no different from your computer’s central processing unit (CPU) without its mouse, keyboard, monitor, printer, and Internet connection. The lights may be on, but not much is going to happen. In this section, we will explore the structures of the PNS that provide these essential input and output functions in the body. The PNS can be separated into two divisions: the somatic nervous system and the autonomic nervous system. The somatic nervous system includes the peripheral portions of the sensory and voluntary movement systems. The autonomic nervous system is responsible for the actions of many glands and organs. Additional output is provided by the endocrine system, through which the CNS can communicate with the body by releasing chemical messengers into the bloodstream. These systems coordinate their efforts to produce consistent patterns of movement, hormone release, and arousal.

4-5a The Somatic Nervous System The somatic nervous system is the part of the PNS that transmits commands for voluntary movement from the CNS to the muscles and brings sensory input back to the CNS for further processing. These functions are carried out by the 31 pairs of spinal nerves serving the torso and limbs and the 12 pairs of cranial nerves serving the head, neck, and some internal organs (see Figure 4.25). Figure 4.25 The Cranial Nerves. Twelve pairs of cranial nerves carry sensory and motor information from the brain to the head, the neck, and some internal organs.

Argosy Publishing, Inc.

4-5b

The Autonomic Nervous System The function of the autonomic nervous system is the control of tissues other than skeletal muscle (Langley, 1921)—in other words, our glands and organs. The term autonomic has the same root as the word autonomy, or independence. You might think of this system as the cruise control of the body because it ensures that your heart keeps beating and your lungs continue to inhale and exhale without your conscious direction. The autonomic nervous system contains three subdivisions: the sympathetic, the parasympathetic, and the enteric. The sympathetic and parasympathetic divisions are active under different circumstances. The sympathetic nervous system prepares the body for situations requiring the expenditure of energy, while the parasympathetic nervous system directs the storage of energy. You have probably experienced intense sympathetic arousal, perhaps because of a close call on the highway. In the aroused state produced by the sympathetic nervous system, our hearts race, we breathe rapidly, our faces become pale, and our palms sweat. All these activities are designed to provide the muscles with the resources they need for a fight-or-flight reaction. The sympathetic nervous system is important to our understanding of stress, described further in Chapter 16. In contrast, the parasympathetic nervous system controls the glands and organs at times of relative calm. Instead of using up energy like the sympathetic nervous system does, the parasympathetic nervous system allows you to store nutrients, repair your body, and return the activities of internal organs to baseline levels. The responses of the internal organs to environmental stimuli reflect a sophisticated combination of inputs from both the sympathetic and the parasympathetic nervous systems (Berntson, Cacioppo, & Quigley, 1991). These systems usually have antagonistic effects on the organs they serve and are designed to alternate their activities (see Figure 4.26). We cannot be simultaneously relaxed and aroused. The sympathetic nervous system dilates the pupils of the eye, whereas the parasympathetic nervous system constricts the pupils. The heart responds to sympathetic commands by beating faster but responds to parasympathetic commands by slowing down. The two divisions do manage, however, to cooperate during sexual activity. Stress activates the sympathetic nervous system, preparing the body for fight or flight or, in this case, fast paddling.

Kurt Jones Figure 4.26 The Autonomic Nervous System. The sympathetic nervous system (left) usually has the opposite effect on an organ compared with the parasympathetic nervous system (right). For example, sympathetic input tells the heart to beat faster, while parasympathetic input tells the heart to slow down. However, both systems cooperate during sex.
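One way to summarize the antagonistic pattern shown in Figure 4.26 is as a simple lookup table pairing each organ with its two autonomic effects. The sketch below is our own illustration in Python, using only the examples mentioned in the text:

    # Antagonistic autonomic effects, organ by organ (examples from the text).
    AUTONOMIC_EFFECTS = {
        # organ: (sympathetic effect, parasympathetic effect)
        "pupils": ("dilate", "constrict"),
        "heart": ("beat faster", "slow down"),
    }

    def effect(organ, division):
        sympathetic, parasympathetic = AUTONOMIC_EFFECTS[organ]
        return sympathetic if division == "sympathetic" else parasympathetic

    print(effect("heart", "sympathetic"))       # prints "beat faster"
    print(effect("pupils", "parasympathetic"))  # prints "constrict"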


Argosy Publishing, Inc. Biofeedback training helps people gain conscious control over some autonomic processes that normally run in the background. People who suffer from migraine headaches can be trained to reduce blood flow to the brain. The enteric nervous system, shown in Figure 4.27, consists of nerve cells embedded in the lining of the gastrointestinal system. This system is often called a “second brain” because it contains as many nerve cells as are found in the spinal cord. The enteric nervous system communicates with the endocrine system, described later in this chapter, to ensure the release of chemicals essential to digestion. Some functions of the enteric nervous system result in conscious perceptions, such as gastrointestinal pain, hunger, and satiety (fullness), while others operate below the threshold of conscious awareness. The latter give rise to our references to having a “gut feeling.” Disturbances of the enteric environment might contribute to the development of autism spectrum disorder (Slattery, MacFabe, Kahler, & Frye, 2016). The enteric nervous system is the source of 95 percent of the body’s serotonin, a neurochemical discussed earlier in this chapter. Individuals with autism spectrum disorder (see Chapter 14) show higher than normal levels of serotonin in their blood (Janusonis, 2008) and often experience gastric distress (Kazek et al., 2013; Israelyan & Margolis, 2018). Figure 4.27 The Enteric Nervous System. The enteric nervous system is often called a second brain. It has about the same number of neurons as the spinal cord, or about as many as are found in the entire brain of an adult cat.

© Cengage Learning

4-5c The Endocrine System The nervous system communicates by passing messages along nerves. In contrast, the endocrine system is made up of glands that release chemical messengers known as hormones into the blood (see Figure 4.28). These chemicals are often identical to those used by one nerve cell to communicate with another, but their actions affect more distant cells in a coordinated fashion. Ultimately, the endocrine system responds to input from the nervous system and from the hypothalamus in particular. The endocrine system is especially involved with arousal, metabolism, growth, and sex. Among the important glands of the endocrine system are the pineal gland, the pituitary gland, the thyroid gland, the adrenal glands, the islets of Langerhans, and the ovaries in females and testes in males. Figure 4.28 Glands of the Endocrine System. The endocrine system communicates with other body tissues by releasing hormones from glands into the bloodstream.


Argosy Publishing, Inc. The pineal gland, and its release of the chemical messenger melatonin, is important in the maintenance of our sleep–wake cycles, which we discuss in Chapter 6. Although not an officially approved and tested medication, melatonin is used by some travellers to offset the unpleasant effects of jet lag. Melatonin is normally released in the early evening, and it breaks down in the presence of light. Thus, exposure to artificial light at night can have negative implications for our health. For example, higher rates of cancer among people working night shifts in hospitals have been attributed to the breakdown of melatonin by light (Dopfel, Schulmeister, & Schernhammer, 2007). Human growth hormone (HGH), released normally by the pituitary gland, has become a popular substance among actors and athletes. Alex Rodriguez (also known as “A-Rod”), who played with the New York Yankees from 2004 to 2016, admitted using HGH to the U.S. Drug Enforcement Administration in 2013, though he has never publicly acknowledged his use of performance-enhancing drugs.


UPI/Alamy Stock Photo The pituitary gland, located just above the roof of your mouth, is often called the body’s master gland, because many of the hormones it releases activate other glands. The pituitary in turn is regulated by the hypothalamus, which lies directly above it. The pituitary hormones form two groups. One group, including oxytocin, vasopressin, and human growth hormone (HGH), is released directly from the pituitary. The second group consists of hormones that influence the release of hormones by other glands. Oxytocin and vasopressin participate in several important physical functions, such as breastfeeding and maintenance of fluid levels, respectively. However, these hormones are also implicated in cooperation and trust, memory for social information, recognition of emotions, and resilience during stress (Meyer-Lindenberg, Domes, Kirsch, & Heinrichs, 2011). Growth hormone stimulates growth and regeneration, making it a popular performance-enhancing substance used illegally by some elite athletes. Other pituitary hormones control the production and release of sex hormones by the ovaries and the testes, initiating puberty and maintaining fertility. In response to pituitary hormones, the thyroid gland—located just below your larynx, or voice box, in your throat—raises or lowers your rate of metabolism, or the chemical processes your body needs to sustain life. Low levels of thyroid hormone can mimic the symptoms of depression, described further in Chapter 14. At times of stress, pituitary hormones activate the adrenal glands, which are located just above the kidneys in the lower back. In response, the adrenal glands release other hormones, including cortisol, that travel throughout the body and the brain to provide a general wake-up message. The islets of Langerhans, located in the pancreas, produce hormones essential to digestion, including insulin. Oxytocin released by the pituitary gland is correlated with human bonding between parent and child and between romantic partners.

Gino Santa Maria/ Shutterstock.com Diverse Voices in Psychology Sex and Gender Bias in Neuroscience Research In order to learn about the nervous and endocrine systems, researchers have often relied on the use of animal models. For example, animal models of hydrocephalus (discussed earlier in this chapter) have contributed greatly to our understanding of the causes underlying this condition and potential treatment options (Di Curzio, 2018). However, the vast majority of animal research conducted in psychology and related fields is limited by the fact that this research has relied almost exclusively on the use of male animals. One reason for this is to simply eliminate sex as a variable by holding it constant. Another reason often given for the overreliance on male subjects is that female rats and mice have reproductive hormone cycles that would create more variability among groups of females as opposed to groups of males. However, research has indicated that there is no merit to this claim, and that female rats are not more variable than male rats (Becker, Prendergast, & Liang, 2016; Beery, 2018). This overreliance on males is not just limited to animal subjects. Historically, it has been assumed that men and women only differ in respect to their reproductive organs, and that any results based on research with male participants would simply transfer over to women as well. This faulty assumption led to the widespread exclusion of females from clinical trials, particularly early-stage trials where it was feared that a lack of knowledge about the effects of the treatment could potentially harm an unborn child, should a female participant become pregnant. However, in the 1990s, both the United States and Canadian governments began issuing guidelines promoting the inclusion of women in all stages of clinical research, recognizing the problems inherent in a system that systematically excluded women from research. The biological systems of males and females differ in many ways beyond reproductive capabilities (e.g., Chapter 16 discusses sex differences in neuroendocrine responses to stress). Rather than ignoring sex, it has been increasingly recognized that clinical trials should explicitly examine sex as a variable, so that we may know whether treatment outcomes differ for men and women. For example, in 2013 it was discovered that women take twice as long as men to metabolize the widely used insomnia drug zolpidem (e.g., Ambien), which meant that women who were taking the standard dose of the drug were much more susceptible to impairment in activities requiring mental alertness the morning after taking the drug. And this is not an isolated case—most drugs that are withdrawn from the U.S. market have greater health risks for women than men (U.S. General Accounting Office, 2001). In 2015, the U.S. National Institutes of Health (NIH) instituted a guideline that strongly encouraged all NIH-funded clinical research to account for sex as a biological variable (NIH, 2015). Similar guidelines have been made in Canada, where Health Canada states that a therapeutic product must be evaluated in subjects “representative of the full range of persons” likely to receive the product before it goes to market, including women of child-bearing potential, women who are not of child-bearing potential, as well as pregnant and breastfeeding women (Government of Canada, 2013). Although the number of female participants in clinical trials has improved over time, the male bias in animal and cell research still persists. 
It is also still the case that many researchers do not examine the effects of sex and/or gender when analyzing their data (though female researchers are more likely to do so than their male counterparts; Nielsen, Andersen, Schiebinger, & Schneider, 2017). While biological sex differences are important to consider, gender (a social construct consisting of the social attitudes and behaviours associated with being a man or a woman) is also an important consideration. For example, one meta-analysis found that individuals who consider themselves more masculine exhibit higher pain thresholds than those who consider themselves less masculine (Alabas, Tashani, Tabasam, & Johnson, 2012). Unfortunately, many biomedical researchers still believe that it is not important to consider sex or gender in experimental design (Woitowich & Woodruff, 2019). As a new generation of researchers receives better training on these matters, the proportion holding that view will hopefully decline, and we will see greater adherence to the inclusivity guidelines set forth by government funding agencies.

© iStockphoto.com/Georgejason Summary 4.3 Peripheral Nervous and Endocrine Systems

Structure
What to remember
   

Argosy Publishing, Inc. Somatic nervous system * Sensation and movement * 12 pairs of cranial nerves * 31 pairs of spinal nerves

Argosy Publishing, Inc. Autonomic nervous system * Sympathetic nervous system: arousal and fight or flight * Parasympathetic nervous system: rest and repair * Enteric nervous system: control of the gastrointestinal system

Argosy Publishing, Inc. Endocrine system * Metabolism, arousal, growth, sex * Glands * Hormones

4-ch summary Chapter Summary In this chapter we have reviewed the basic biological structures and functions that underlie the variety of mental processes and behaviours examined throughout this book. Biological psychologists focus directly on the biological underpinnings of human and animal behaviour, but it is important for every student of psychology to keep in mind that each thought we have, attitude we hold, memory we recall, and feeling we experience is ultimately the result of neurons firing in our brains. Thus, the biological perspective is essential to obtaining a complete picture of any psychological process. Neurons and glia are the building blocks of the nervous system, the body’s electrochemical communication circuitry. Neurons communicate with each other by generating action potentials, which cause the release of neurotransmitters. Different neurotransmitters have different effects on neighbouring neurons, and drugs work by affecting endogenous neurotransmitter systems. The brain and spinal cord make up the central nervous system, which coordinates the activities of the body. The spinal cord enables the exchange of information between the brain and body. The brain is responsible for everything from keeping our hearts beating to solving space-fractional diffusion equations. While most psychological phenomena involve the coordinated actions of many brain regions, we also know that certain areas of the brain play particularly important roles in certain activities (e.g., the hippocampus and the storage of memories). The peripheral nervous system is divided into the somatic nervous system, which is responsible for voluntary motor control and collecting sensory information from the external world, and the autonomic nervous system, which regulates involuntary bodily functions such as digestion and breathing. Finally, the endocrine system is responsible for the release of hormones throughout the body. As we will see, it is the coordinated actions of the endocrine and nervous systems that underlie many important activities, from sleep (Chapter 6), to sex (Chapter 7), to stress (Chapter 16).

4-key terms Key Terms The Language of Psychological Science Be sure that you can define these terms and use them correctly. * action potential * agonists * amygdala * antagonists * autonomic nervous system * axons * basal ganglia * brainstem * cell body * central nervous system (CNS) * cerebellum * cerebral cortex * cingulate cortex * corpus callosum * dendrites * endocrine system * enteric nervous system * executive functions * frontal lobe * glia * hippocampus * hypothalamus * medulla * midbrain * myelin * neurogenesis * neurons * neuroplasticity * neurotransmitters * nucleus accumbens * occipital lobe * orbitofrontal cortex * parasympathetic nervous system * parietal lobe * peripheral nervous system (PNS) * pons * prefrontal cortex * receptors * resting potential * reticular formation * reuptake * somatic nervous system * spinal cord * sympathetic nervous system * synapse * temporal lobe * thalamus

ch 5 intro Chapter Introduction The rods and cones in the retina begin the process of interpreting the light energy that enters the eye.


Argosy Publishing, Inc. Learning Objectives 1. Explain the basic concepts of sensation and perception, including transduction of stimuli into neural signals, distinctions between bottom-up and top-down perceptual processing, thresholds, and measurement. 2. Identify the process by which the physical structures of the eye transduce light waves into neural signals, producing the sense of vision. 3. Summarize the processes responsible for colour vision, object recognition, and depth perception. 4. Describe the process by which physical structures of the ear transduce sound waves into neural signals, producing perception of pitch, loudness, and spatial location in hearing. 5. Explain the mechanisms by which the somatosensory and chemical sense systems produce perception of body position, touch, skin temperature, pain, smell, and taste. 6. Analyze the causes of various individual differences in perception, including development and culture, in terms of biology, experience, and their interaction. We like to think we understand reality. After all, we can see, hear, touch, smell, and taste it. We don’t live in some science fiction universe where things are not how they appear. Or do we? The human eye can see many different colours, but what does it mean to “see” a colour? Is colour something that is a fixed quality of an object? Is the sky really blue? Is an apple really red? The answer, which may surprise you, is “no.” Colour is not a property of light or objects that reflect light—the colours that you perceive are a construction of your brain. As we review each of the senses in this chapter, we will zoom in to the particular types of receptors, such as the photoreceptors depicted here, that are responsible for taking information from the world and translating it into neural signals that your brain can understand. We will then zoom out to explore how numerous factors can affect how your brain actually perceives or interprets that information. For example, even though two people might be staring at the very same image, they may come to very different interpretations of what they are seeing. Consider the image of the blue/black or white/gold dress that became an Internet sensation in February 2015. A friend of a Scottish bride posted a photo of the dress worn by the bride’s mother on her Tumblr blog, leading to a discussion that engaged everyone from Justin Bieber to esteemed neuroscientists. Why do people see this photo so differently? Neuroscientists disagree about why the dress produced such different responses. The Journal of Vision devoted an entire issue (“A Dress Rehearsal for Vision Science”) to explaining the dress phenomenon. A survey of 1400 people found that 57 percent described the dress as blue/black (which is correct), 30 percent as white/gold, 11 percent as blue/brown, and 2 percent as something else (Lafer-Sousa, Hermann, & Conway, 2015). Older individuals and women were more likely to choose white/gold. These researchers believe that people choose dress colours based on their expectations regarding the lighting: if you think the dress was photographed in daylight, you reach different conclusions than if you think it was photographed under artificial light. “The Dress” became an Internet phenomenon as people debated its true colours. This cartoon by Randall Munroe highlights how the same dress (the dress is identical in both images) appears different when it is illuminated under blue/dim lighting versus white/bright lighting conditions.

buran_4ik/ Shutterstock.com Other scientists believe there is something special about the colour blue due to our considerable experience with natural lighting (Winkler, Spillmann, Werner, & Webster, 2015). Because indirect lighting and shadows are usually blue, participants are more likely to confuse blue objects with blue lighting. If you assume the light falling on the dress is somewhat blue, you will probably see it as white. As you’ll see in this chapter, we construct models of reality from the information obtained through our senses. We like to think that we are aware of the world around us, and it is unsettling to realize that the world might be different from the representations of reality formed by the human mind. You will learn how the models built by the human mind have promoted our survival over many generations. Our models of reality are distinct from those built by the minds of other animals, whose survival depends on obtaining different types of information from their environments.

5-1 How Does Sensation Lead to Perception? Our bodies are bombarded with information during wakefulness and sleep. This information takes many forms, from the electromagnetic energy of the sun to vibrations in the air to molecules dissolved in saliva on our tongues. The process of sensation brings information to the brain that arises in the reality outside our bodies, like a beautiful sunset, or originates from within, like an upset stomach. Sensory systems have been shaped by natural selection, described in Chapter 3, to provide information that enhances survival within a particular niche. We sense a uniquely human reality, and one that is not shared by other animals. Your dog howls seconds before you hear the siren from an approaching ambulance because the dog’s hearing is better than yours for high-pitched sounds. Horses bolt at the slightest provocation, but they may be reacting to the vibrations of an approaching car or animal, which they sense through their front hooves, a source of information that is not available to the rider. Some animals sense light energy outside the human visible spectrum. Insects can see ultraviolet light, and some snakes use infrared energy to detect their prey. Differences in sensation do occur from person to person, such as whether or not a person needs corrective glasses, but they are relatively subtle. However, once we move from the process of sensation to that of perception, or the interpretation of sensory input, individual differences become more evident. For example, individuals rooting for different sports teams may come to different conclusions about the fairness of a game (as discussed in Chapter 13). Everyone watching the game sensed similar information, but each person’s perceptions are unique. For individuals with synesthesia, a condition in which the stimulation of one sensory pathway leads to the simultaneous and automatic stimulation of another sensory pathway, the same sensory input can lead to dramatic differences in perception. Individuals with synesthesia may always see letters as specific colours (grapheme-colour synesthesia), taste words (lexical-gustatory synesthesia), or perceive non-visual stimuli (e.g., sounds) as specific colours (chromesthesia). As mentioned in Chapter 4, the vast majority of individuals with synesthesia are born with it, and many do not even realize that their perceptions are unusual until they discover that other people do not share their experiences. For most people, identifying the triangle of 2s in the image on the left is difficult (particularly when the image is only presented for one second). However, for someone with synesthesia who sees 5s and 2s as different colours (such as the red and green shown here), identifying the triangle of 2s is an easy task.

From Ramachandran, V. S., & Hubbard, E. M. (2001). Synaesthesia—a window into perception, thought and language. Journal of Consciousness Studies, 8(12), 3-34. Reprinted by permission of the publisher.

5-1a Sensory Information Travels to the Brain Sensation begins with the interaction between a physical stimulus and our biological sensory systems. A stimulus is anything that elicits a reaction from our sensory systems. For example, we react to light energy that falls within our visual range, as we will see later in this chapter, but we cannot see light energy that falls outside that range, such as the microwaves that cook our dinner or the ultraviolet waves that harm our skin (see Figure 5.1). Figure 5.1 All Species Experience an Adaptive Reality. Humans see only a small part of the electromagnetic energy emitted from the Sun. Some animals see even less. Dogs apparently do fine seeing blues, yellows, and greys, whereas humans have evolved to see a more colourful world. The dog’s view of the world is simulated in the photo on the right.


© Cengage Learning Before you can use information from your senses, it must be translated into a form the nervous system can understand. This process of translation from stimulus to neural signal is known as transduction. You might think of sensory transduction as being similar to the processing of information by your computer. Modern computers transduce a variety of inputs, including voice, keyboard, mouse clicks, and touch, into a programming language for further processing.
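To make the computer analogy concrete, here is a minimal sketch of transduction as a function: a made-up receptor that converts stimulus intensity into a firing rate. The threshold and maximum rate are invented values used purely for illustration; real receptors differ across the senses and are far more complex.

```python
def transduce(intensity, threshold=0.02, max_rate=200.0):
    """Toy sensory receptor: convert stimulus intensity (arbitrary
    units) into a firing rate in spikes per second.

    The threshold and max_rate values are invented for illustration.
    """
    if intensity < threshold:
        return 0.0  # below the absolute threshold: no signal is sent
    # Compress a huge range of intensities into a bounded firing rate.
    return max_rate * intensity / (intensity + 1.0)

# Faint stimuli produce nothing; strong ones approach saturation.
for i in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"intensity {i:>6}: {transduce(i):6.1f} spikes/s")
```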

5-1b The Brain Constructs Perceptions from Sensory Information Once information from the sensory systems has been transduced into neural signals and sent to the brain, the process of perception, or the interpretation of the sensory information, begins. Perception allows us to organize, recognize, and use the information provided by the senses. An important gateway to perception is the process of attention, defined as a narrow focus of consciousness. If you think about the most memorable advertisements you have seen lately on television or online, it is likely that they share the features of attention-getting stimuli: novelty (we don’t see talking geckos every day), change (rapid movement, use of changing colours, and the dreaded pop-up), and intensity (the sound is often louder than the program you’re watching). As we discuss in Chapters 6, 9, and 10, attention often determines which features of the environment influence our subsequent thoughts and behaviours. Which stimuli are likely to grab our attention? Unfamiliar, changing, or high-intensity stimuli often affect our survival and have a high priority for our attention. Unfamiliar stimuli in our ancestors’ environment might have meant a new source of danger (an unknown predator) or a new source of food (an unfamiliar fruit) that warranted additional investigation. Our sensory systems are particularly sensitive to change in the environment. When you first step into a fast food restaurant, the smell of burgers and fries can be overwhelming (and really get your stomach growling!), but a few minutes later the smell will be barely noticeable. This reduced response to an unchanging stimulus is known as sensory adaptation. High-intensity stimuli, such as bright lights and loud noises, draw our attention because the situations that produce these stimuli, such as a nearby explosion, can have obvious consequences for our safety. We rarely have the luxury of paying attention to any single stimulus. In most cases, we experience divided attention, in which we attempt to process multiple sources of sensory information. Students manage to walk to class while texting without getting run over by a car. These divided attention abilities are limited. We simply cannot process all the information converging simultaneously on our sensory systems. To prioritize input, we use selective attention, or the ability to focus on a subset of available information and exclude the rest. These abilities may be disrupted in cases of attention deficit hyperactivity disorder (ADHD; Wimmer et al., 2015; also see Chapter 14). You can also thank the processes of selective attention and sensory adaptation for the fact that you are not constantly distracted by the sight of your own nose. Although your nose always appears in your peripheral vision, your brain ignores it in order to focus on more interesting and important things (except for right now, of course, since we have brought it to your attention—sorry!). We have all had the experience of watching events with others (sensation) and then being shocked by the different interpretations we hear of what just happened (perception).

Photo by Peter Kneffel/picture alliance via Getty Images Divided attention abilities are limited. Some people believe that heads-up displays for cars assist drivers with divided attention, while others believe the displays are too distracting.

chombosan/ Shutterstock.com We refer to the brain’s use of incoming signals to construct perceptions as bottom-up processing. For example, we construct our visual reality from information about light that is sent from the eye to the brain. However, the brain also imposes a structure on the incoming information, a type of processing known as top-down processing. In top-down processing, we use knowledge gained from prior experience with stimuli to perceive them. For example, a skilled reader has no trouble reading the following sentences, even though the words are jumbled: All you hvae to do to mkae a snetnece raedalbe is to mkae srue taht the fisrt and lsat letrtes of ecah wrod saty the smae. Wtih prcatcie, tihs porcses becoems mcuh fsater and esaeir. Selective attention, or our focus on a subset of input, prioritizes incoming information. However, we can sometimes be so focused that we miss important information. An astonishing 20 out of 24 expert radiologists completely missed the image of a gorilla superimposed on scans of lungs while searching the scans for signs of cancer.
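The jumbled sentences above follow a simple rule: keep each word’s first and last letters in place and shuffle the interior. A short script, sketched here, can generate such text; note that the readability effect also depends on word length and context, so not every scramble reads this easily.

```python
import random

def jumble(word):
    """Shuffle a word's interior letters, leaving the first and last
    letters in place. Words of three letters or fewer are unchanged."""
    if len(word) <= 3:
        return word
    interior = list(word[1:-1])
    random.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

sentence = "Reading depends on knowledge as well as sensation"
print(" ".join(jumble(word) for word in sentence.split()))
```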

skyhawk x/ Shutterstock.com Source: T. Drew, M. L.-H. Võ, & J. M. Wolfe (2013). “The Invisible Gorilla Strikes Again: Sustained Inattentional Blindness in Expert Observers,” Psychological Science, 24(9), 1848–1853. doi: 10.1177/0956797613479386. How can we explain our ability to read these sentences? First, we require bottom-up processing to bring the sensations of the letter shapes to our brain. From there, however, we use knowledge and experience to recognize individual words. Many students have learned the hard way that term papers must be proofread carefully. As in our example, if the brain expects to see a particular word, you are likely to see that word, even if it is misspelled—a mistake that is unlikely to be made by the literal, bottom-up processing of a computer spell-checker. Can we predict when the mind will use bottom-up or top-down processing? There are no hard and fast rules. Obviously, we always use bottom-up processing, or the information would not be perceived. It is possible that bottom-up processing alone allows us to respond appropriately to simple stimuli, like indicating whether you saw a flash of light. As stimuli become more complicated, like reading a sentence or recognizing a friend in a crowd, we are more likely to engage in top-down processing in addition to bottom-up processing. Our expectations greatly inform our perceptions, because our brains are constantly predicting what we will see (or hear, etc.) next. Measuring Perception Gustav Fechner (1801–1887) developed methods, which he called psychophysics, for studying the relationships between stimuli (the physics part) and perception of those stimuli (the psyche or mind part) (see Figure 5.2). Fechner’s careful methods not only contributed to the establishment of psychology as a true science but are still used in research today. Figure 5.2 Connecting the Physical World and the Mind. “Golden” rectangles, named for their proportions rather than colour, appear in art and architecture dating back to ancient Greece, but why are they attractive? Gustav Fechner (1801–1887) made many attempts to link physical realities with human psychological responses. He asked people to choose which rectangles are most pleasing or least pleasing. His results indicated that the most pleasing rectangle was fourth from the right. This rectangle is the closest to having golden proportions (1:1.618). Its sides have a ratio of 13:21.
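A quick check shows why a 13:21 rectangle comes so close to golden proportions:

```latex
\frac{21}{13} \approx 1.615,
\qquad
\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618 .
```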

© Cengage Learning; Photo Researchers, Inc/Alamy Stock Photo The methods of psychophysics allow us to establish the limits of awareness, or thresholds, for each of our sensory systems. The smallest possible stimulus that can be detected at least 50 percent of the time is known as the absolute threshold. Under ideal circumstances, our senses are surprisingly sensitive (see Figure 5.3). For example, you can see the equivalent of a candle flame about 48 kilometres (30 miles) away on a moonless night. We can also establish a difference threshold, or the smallest difference between two stimuli that can be detected at least 50 percent of the time (this is also referred to as a just-noticeable difference, or JND). The amount of difference that can be detected depends on the size of the stimuli being compared. As stimuli get larger, differences must also become larger to be detected by an observer, a phenomenon known as Weber’s law (the foundation of the Weber–Fechner law). For example, imagine you and your friend are each snacking on a small bag of chips—your chips are extremely salty, while theirs are only lightly salted. If someone were to come by and shake an equal amount of extra salt into each bag of chips, who is more likely to notice the added salt? Your friend—because this added salt is more likely to pass their difference threshold than your own, since your chips are already very salty to begin with. Figure 5.3 Absolute Sensory Thresholds. An absolute threshold is the smallest amount of sensation that can be processed by our sensory systems under ideal conditions. Moving from left to right in this image, we see that the absolute threshold for touch is the equivalent of feeling the wing of a fly fall on your cheek from a distance of about 1 centimetre (0.4 inch), the absolute threshold for olfaction is a drop of perfume in the air filling a six-room apartment, the absolute threshold for sweetness is the equivalent of about 5 grams (one teaspoon) of sugar in 7.5 litres (about two gallons) of water (the absolute threshold for bitter tastes is even lower), the absolute threshold for hearing is the equivalent of the sound of a mosquito 3 metres (about 10 feet) away, and the absolute threshold for vision is seeing a candle flame about 48 kilometres (30 miles) away on a dark, clear night.
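The chips example can be stated precisely. Weber’s law says the just-noticeable difference (ΔI) is a roughly constant proportion (k) of the starting intensity (I); the value k = 0.1 below is chosen purely for illustration:

```latex
\frac{\Delta I}{I} = k
\qquad\Longrightarrow\qquad
k = 0.1:\;\;
I = 10 \Rightarrow \Delta I = 1,
\qquad
I = 100 \Rightarrow \Delta I = 10 .
```

The same pinch of extra salt is a large fraction of the lightly salted baseline but a small fraction of the very salty one, which is why only your friend notices it.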

© Cengage Learning; Photos, left to right: Gladskikh Tatiana/ Shutterstock.com; Kuttelvaserova Stuchelova/ Shutterstock.com; Christopher Elwell/ Shutterstock.com; AlexRoz/ Shutterstock.com. Signal Detection Many perceptions involve some uncertainty. Perhaps you’re driving rather fast, and you think a distant car behind you might be a police officer. Do you slow down right away? Or do you wait until the car is close enough that you know for sure it’s a police officer? How do your personal feelings about making mistakes affect your decision? Would the cost of a ticket ruin your budget? According to Fechner’s work on the difference threshold, British Olympian Zoe Smith would be more likely to notice the difference between 1- and 2-kilogram weights than the difference between her new record of 121 kilograms in the clean and jerk event and the former record of 120 kilograms.

YURI CORTEZ/AFP/Getty Images/Newscom This type of decision making can have serious implications, such as in the case of decisions made by radiologists examining the results of mammograms for signs of cancer or by intelligence officers assessing the possibility of an attack. Is there reason for concern or not? This situation is different from the thresholds described earlier because it adds the cognitive process of decision making to the process of sensation. In other words, signal detection is a two-step process involving (a) the actual intensity of the stimulus, which influences the observer’s belief that the stimulus did occur, and (b) the individual observer’s criterion for deciding whether the stimulus occurred. Research on the detection of visual stimuli by primates has indicated that stimuli that remain undetected by the monkeys are associated with weaker and briefer frontal cortex activation compared to visual stimuli that are consciously reported (van Vugt et al., 2018). Another example of signal detection is a jury’s decision about whether a person is guilty. Based on frequently uncertain and conflicting evidence, jurors must weigh the risk of convicting an innocent person (a false alarm) against the risk of letting a real criminal go free (a miss). Experiments on signal detection provide insight into this type of decision making. In these experiments, trials with a single, faint stimulus and trials with no stimulus are presented randomly. The participant states whether a stimulus was present on each trial. The possible outcomes of this experiment are shown in Table 5.1. In the case of reading mammograms, we can use such experiments to help us understand why two people might respond differently, even if they were sensing the same information. Ideally, a radiologist would identify 100 percent of all tumours without any false alarms, but mammograms are not that easy to evaluate. A radiologist afraid of missing a tumour might identify anything that looks remotely like a tumour as the basis for more testing. Few cases of cancer would be missed (high hit rate), but many healthy patients would go through unnecessary procedures (high false alarm rate). In contrast, another radiologist might need a higher level of certainty about the presence of a tumour before asking for further tests. This would reduce the number of false alarms, but it would also run a higher risk of overlooking tumours (high miss rate).


Photos, left to right: fotomak/ Shutterstock.com; seeyou/ Shutterstock.com; taedong/ Shutterstock.com; Karen H. Ilagan/ Shutterstock.com. Does this mammogram indicate the woman has cancer or not? Many decisions we make are based on ambiguous stimuli. Signal detection theory helps us understand how an individual doctor balances the risks of missing a cancer and those of alarming a healthy patient.

BSIP/Getty Images Table 5.1 Possible Outcomes in Signal Detection

Participant Response   Stimulus Present   Stimulus Absent
Yes                    Hit                False alarm
No                     Miss               Correct rejection
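Researchers often summarize performance in such an experiment with two numbers: sensitivity (how well the signal is discriminated from noise) and criterion (the observer’s bias toward saying “yes”). One standard measure, d′, is the difference between the z-transformed hit and false alarm rates. This is a conventional signal detection calculation rather than anything specific to this chapter, and the trial counts below are invented:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # inverse of the standard normal CDF

# Invented counts from a hypothetical signal detection experiment.
hits, misses = 40, 10                     # stimulus-present trials
false_alarms, correct_rejections = 5, 45  # stimulus-absent trials

hit_rate = hits / (hits + misses)                             # 0.80
fa_rate = false_alarms / (false_alarms + correct_rejections)  # 0.10

d_prime = z(hit_rate) - z(fa_rate)             # sensitivity, ~2.12
criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # >0: a conservative observer

print(f"hit rate {hit_rate:.2f}, false alarm rate {fa_rate:.2f}")
print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")
```

The cautious radiologist and the liberal radiologist described above could have the same d′ but very different criterion values.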

Summary 5.1 Assessing Perception

Concept: Absolute threshold
(Karen H. Ilagan/ Shutterstock.com)
Definition: The smallest amount of stimulation that can be detected at least 50 percent of the time.
Example: Seeing light from a candle flame 48 kilometres (30 miles) away on a dark night.

Concept: Difference threshold
(YURI CORTEZ/AFP/Getty Images/Newscom)
Definition: The smallest difference between two stimuli that can be detected at least 50 percent of the time (the just-noticeable difference).
Example: Noticing the difference between 1- and 2-kilogram weights more readily than the difference between 120- and 121-kilogram weights.

5-2 How Do We See? Vision, the processing of light reflected from objects, is one of the most important sensory systems in humans. Approximately 50 percent of our cerebral cortex processes visual information, in comparison to only 3 percent for hearing and 11 percent for touch and pain (Kandel & Wurtz, 2000; Sereno & Tootell, 2005). We will begin our exploration of vision with a description of the visual stimulus, and then we will follow the processing of that stimulus by the mind into a meaningful perception. Some types of snakes (vipers, boas, and pythons) can sense prey using infrared energy.

Ted Kinsman/Science Source

5-2a The Visual Stimulus Visible light, or the energy within the electromagnetic spectrum to which our visual systems respond, is a type of radiation emitted by the sun, other stars, and artificial sources such as a light bulb. As shown in Figure 5.4, light energy moves in waves, like the waves in the ocean. Wavelength, or the distance between successive peaks of waves, is decoded by our visual system as colour or shades of grey. The height, or amplitude, of the waves is translated by the visual system into brightness. Large-amplitude waves appear bright, and low-amplitude waves appear dim. Figure 5.4 Light Travels in Waves. The distance between two peaks in a light wave (wavelength) is decoded by the visual system as colour and the height, or amplitude, of the wave as brightness.


© Cengage Learning The human visual world involves only a small part of this light spectrum (review Figure 5.1). Gamma rays, x-rays, ultraviolet rays, infrared rays, microwaves, and radio waves lie outside the capacities of the human eye.

5-2b The Biology of Vision Human vision begins with the eye. The eye is roughly spherical and about the size of a ping-pong ball. Its hard outer covering helps the fluid-filled eyeball retain its shape. Toward the front of the eye, the outer covering becomes clear and forms the cornea. The cornea begins the process of bending light to form an image on the back of the eye. Travelling light next enters the pupil, which is actually an opening formed by the muscles of the iris (see Figure 5.5). The iris, which means “rainbow” in Greek, adjusts the opening of the pupil in response to the amount of light present in the environment and to signals from the autonomic nervous system, described in Chapter 4. Arousal is associated with dilated pupils, while relaxation is associated with more constricted pupils. Figure 5.5 The Human Eye. Light entering the eye travels through the cornea, the pupil, and the lens before reaching the retina. Among the landmarks on the retina are the fovea, which is specialized for seeing fine detail, and the optic disk, where blood vessels enter the eye and the optic nerve exits the eye.

James P. Gilman, C.R.A./Phototake; © Cengage Learning Directly behind the pupil and iris is the main optical instrument of the eye, the lens. Muscles attached to the lens can change its shape, allowing us to accommodate, or adjust our focus to see near or distant objects. The muscles relax and the lens flattens in order to focus on distant objects, and the muscles contract and the lens becomes more spherical to focus on near objects. Behind the lens is the main chamber of the eye, and located on the rear surface of this chamber is the retina, a thin but complex network of neurons specialized for the processing of light. Located in the deepest layer of the retina are specialized receptors, the rods and cones, that transduce the light information. However, before light reaches these receptors, it must pass through layers of blood vessels and neurons. We normally do not see the blood vessels and neural layers because of sensory adaptation. As we mentioned previously in this chapter, adaptation occurs when sensory systems tune out stimuli that never change. Because the blood vessels and neural layers are always in the same place, we see them only under unusual circumstances, such as during certain ophthalmology (eye) tests. We can identify several landmarks on the surface of the retina. The blood vessels serving the eye and the axons that leave the retina to form the optic nerve exit at the optic disk. Because there are no rods and cones in the optic disk, each eye has a blind spot. Normally, we are unaware of our blind spots because perception fills in the missing details. However, if you follow the directions in Figure 5.6, you should be able to experience your own blind spot. Toward the middle of the retina is the fovea, which is specialized for seeing fine detail. When we stare directly at an object, the image of that object is projected onto the fovea. The fovea is responsible for central vision, as opposed to peripheral vision, which is the ability to see objects off to the side while looking straight ahead. Figure 5.6 Now You See It—Now You Don’t. There are no photoreceptors in the optic disk, producing a blind spot in each eye. We do not see our blind spots because our brain fills in the hole. You can demonstrate your blind spot by holding your textbook at arm’s length, closing one eye, focusing your other eye on the dot, and moving the book toward you until the stack of money disappears.


© Cengage Learning The image projected on the retina is upside down and reversed relative to the actual orientation of the object being viewed (see Figure 5.7). You can duplicate this process by looking at both sides of a shiny spoon. In the convex (or outwardly curving) side, you see your image normally. In the concave (or inwardly curving) side, you see your image as your retina sees it. Fortunately, the visual system easily decodes this image and provides realistic perceptions of the actual orientations of objects. Figure 5.7 What the Retina “Sees.” The image projected on the retina is upside down and reversed, but the brain is able to interpret the image to perceive the correct orientation of an object.

Ariwasabi/ Shutterstock.com; © Cengage Learning Rods and Cones Rods and cones are named after their shapes. The human eye contains about 90 million rods and between 4 million and 5 million cones. Rods and cones are responsible for different aspects of vision. Rods are more sensitive to light than cones, and they excel at seeing dim light. As we observed previously, under ideal circumstances, the absolute threshold for human vision is the equivalent of a single candle flame from a distance of about 48 kilometres (30 miles; see Hecht, Shlaer, & Pirenne, 1942). Rods become more common as we move from the fovea to the periphery of the retina, so your peripheral vision does a better job of viewing dim light than your central vision does (see Figure 5.8). Before the development of night-vision goggles, soldiers patrolling in the dark were trained to look to the side of a suspected enemy position rather than directly at their target. Figure 5.8 Distribution of Rods and Cones Across the Retina. In humans, cones, indicated by red, blue, and green dots, become less frequent as you move from the fovea to the periphery of the retina. The colours of the dots representing cones indicate the colours to which each shows a maximum response (see Figure 5.11). Rods (light brown dots) and cones are named according to their shapes.

© Cengage Learning This extraordinary sensitivity of rods has costs. Rods do not provide information about colour, nor do they provide clear, sharp images. Under starlight, normal human vision is 20/200 rather than the normal daylight 20/20. In other words, an object seen at night from a distance of 20 feet (about 6.1 metres) would have the same clarity as an object seen in bright sunlight from a distance of 200 feet (about 61 metres). Cones function best under bright light and provide the ability to see both sharp images and colour. Visual Pathways The rods and cones are the only true receptors of the visual system. When they absorb light, they trigger responses in four additional layers of neurons within the retina. Axons from the final layer of cells, the ganglion cells, leave the back of the eye to form the optic nerve. The optic nerves cross the midline at the optic chiasm (named after its X shape, or the Greek letter chi). At the optic chiasm, the axons closest to the nose cross over to the other hemisphere, while the axons to the outside proceed to the same hemisphere. This partial crossing means that if you focus straight ahead, everything to the left of centre in the visual field is processed by the right hemisphere, while everything to the right of centre is processed by the left hemisphere. This organization provides us with significant advantages when sensing depth, which we discuss later in the chapter. Beyond the optic chiasm, the visual pathways are known as optic tracts (see Figure 5.9). About 90 percent of the axons in the optic tracts synapse in the thalamus. The thalamus sends information about vision to the amygdala and the primary visual cortex in the occipital lobe. The amygdala uses visual information to make quick emotional judgments, especially about potentially harmful stimuli. The remaining optic tract fibres connect with the hypothalamus, where their input provides information about light needed to regulate sleep–wake cycles, discussed in Chapter 6, or with the superior colliculi of the midbrain, which manage a number of visually guided reflexes, such as changing the size of the pupil in response to light conditions. Figure 5.9 Visual Pathways. Visual information from the retina travels to the thalamus and then to the primary visual cortex in the occipital lobe.

© Cengage Learning The primary visual cortex (often referred to as V1) begins, but by no means finishes, the processing of visual input. Canadian-born neurophysiologist David Hubel, along with his Swedish collaborator Torsten Wiesel, received a Nobel Prize in 1981 for their ground-breaking research on the development of the visual system. Their research greatly expanded our knowledge of how the brain takes signals from the eye and produces the building blocks of a visual scene. The primary visual cortex responds to object shape, location, movement, and colour (Hubel & Livingstone, 1987; Hubel & Wiesel, 1959; Livingstone & Hubel, 1984). Two major pathways radiating from the occipital cortex continue the analysis of visual input (Goodale & Milner, 1992). The dorsal stream extends upward from V1 in the occipital lobe into the parietal lobe. This is the “where” pathway that helps us process movement and localize objects in space. The ventral stream extends downward from V1 in the occipital lobe to the temporal lobe. This is the “what” pathway that responds to shape and colour and contributes to our ability to recognize objects and faces. Much of the evidence for these two distinct but interacting visual pathways comes from research conducted with Patient D.F. After surviving an accidental carbon monoxide poisoning, Patient D.F. developed visual apperceptive agnosia, a condition in which individuals are unable to recognize and name objects, despite having intact vision. Although Patient D.F. is unable to identify common objects, she is still able to orient her hands and fingers appropriately to pick up or manipulate objects (e.g., putting an envelope through a slot, or grasping a small object), indicating that the action-oriented “where” stream is distinct from the “what” stream (Goodale, Milner, Jakobson, & Carey, 1991; Whitwell, Milner, & Goodale, 2014).

5-2c Visual Perception and Cognition To see something requires the brain to interpret the information gathered by the eyes. How do you know your sweater is red or green, based on the information sent from the retina to the brain? How do you recognize your grandmother at your front door? Colour Vision Most of us think about colours in terms of the paints and crayons we used in elementary school. Any kindergartner can tell you that mixtures of red and yellow make orange, red and blue make purple, and yellow and blue make green. Mixing them all together produces a lovely muddy brown. Coloured lights, however, work somewhat differently (see Figure 5.10). The primary colours of light are red, green, and blue, and mixing them together produces white light, like sunlight. If you have ever adjusted the colour on your computer monitor or television, you know that these devices also use red, green, and blue as primary colours. Observations supporting the existence of three primary colours of light gave rise to a trichromatic theory of colour vision. Figure 5.10 Mixing Coloured Lights. The primary colours of paint might be red, yellow, and blue, but in the world of light, the primary colours are red, green, and blue.

© Cengage Learning Trichromatic theory is consistent with the existence of three types of cones in the retina that respond best to short (blue), medium (green), or long (red) wavelengths. Our ultimate experience of colour comes not from the response of one type of cone but from comparisons among the responses of all three types of cones (see Figure 5.11). Figure 5.11 Responses by Cones to Coloured Light. Our perception of colour results from a comparison of the responses of the red, green, and blue cones to light. A 550-nanometre (nm) light is perceived as yellow and produces a strong response in green cones, a moderate response in red cones, and little response in blue cones.
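To see how comparing cone responses could signal colour, consider a toy model in which each cone type’s sensitivity is a bell curve centred on its preferred wavelength. The peak and width values below are rough illustrative choices, not measured cone spectra, which are asymmetric and overlap more heavily:

```python
import math

# Illustrative peak sensitivities in nanometres (not measured values).
CONE_PEAKS = {"blue": 445, "green": 535, "red": 575}
WIDTH = 45  # standard deviation of the toy tuning curves, in nm

def cone_response(cone, wavelength):
    """Relative response (0 to 1) of a cone type to a wavelength."""
    peak = CONE_PEAKS[cone]
    return math.exp(-((wavelength - peak) ** 2) / (2 * WIDTH ** 2))

# A 550 nm light, perceived as yellow: the strongest response comes
# from green cones, a somewhat weaker one from red cones, and almost
# none from blue cones. The pattern across all three encodes colour.
for cone in CONE_PEAKS:
    print(f"{cone:>5} cone at 550 nm: {cone_response(cone, 550):.2f}")
```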

© Cengage Learning Colour deficiency occurs when a person has fewer than the typical three types of cones. We no longer use the term colour-blind, as this is not accurate. Most people with colour deficiencies see colour differently from someone with all three cone types. Very rarely, individuals have either one type of cone or none. To these people, the world appears to be black, white, and grey. Trichromatic theory does a good job of explaining colour deficiency, but it is less successful in accounting for other colour vision phenomena, such as colour afterimages. For example, if you stare at the yellow, green, and black flag in Figure 5.12 and then focus on the dot within the white rectangle to the right, you will see an afterimage of the American flag in its more traditional colours of red, white, and blue. Figure 5.12 Afterimages Demonstrate Opponent Process Theory. If you stare at the dot in the centre of the yellow, green, and black flag for a minute and then shift your gaze to the dot in the white space on the right, you should see the American flag in its traditional red, white, and blue.

© Cengage Learning An opponent process theory of colour vision does a better job than the trichromatic theory in explaining these colour afterimages. This theory proposes the existence of colour channels: a red–green channel and a blue–yellow channel. We cannot see a colour like reddish green or bluish yellow because the two colours share the same channel. The channels are “opponent,” or competing: activity in one colour group in a channel reduces activity in the other colour group. While most people have three different types of cones, allowing for the perception of approximately 1 million colours, tetrachromats have four types of cones, which greatly increases the number of colours they may be able to perceive. Although up to 12 percent of women are thought to be tetrachromatic (due to mutations on the X chromosome), identifying tetrachromats is a challenging task (Jordan, Deeb, Bosten, & Mollon, 2010). It is also believed that while having a fourth cone creates the potential for dramatically enhanced colour perception, experience is also vital. Australian artist Concetta Antico is a tetrachromat whose fascination with paints at a very young age may have contributed to her expanded colour perception.

Courtesy of Concetta Antico Psychology Takes on Real-World Problems Colour and Accessible Web Design Now that you have an understanding of colour perception, we can consider one of the practical problems associated with individual differences in colour vision. Between 7 and 10 percent of males and about 0.4 percent of females have a form of red–green colour deficiency (see Figure 5.13). Males are more affected than females because the genes for the pigments used by red and green cones are located on the X chromosome, making red–green colour deficiency a sex-linked condition (see Chapter 3). Smaller numbers of people lack blue cones (0.0011 percent) or cones altogether (0.00001 percent). Given the frequency of colour deficiency, making visual materials accessible to people with all types of colour vision is a serious concern. Figure 5.13 Detecting Colour Deficiency. The Ishihara Colour Test, designed by Shinobu Ishihara in 1917, is a standard method for detecting colour deficiency. The test is printed on special paper, so the recreated image here would not be considered a valid basis for diagnosing colour deficiency.

PRISMA ARCHIVO/Alamy Stock Photo Colour can be an effective tool for designing exciting and engaging websites, but many graphic web designers, who usually have typical colour vision themselves, fail to consider how the site might look to a person with a colour deficiency. A website that fails to consider the experience of individuals with colour deficiency not only risks losing business, but may also be open to penalties and fines. The Accessibility for Ontarians with Disabilities Act (AODA) requires websites belonging to public sector organizations, or to private or nonprofit organizations with at least 50 employees, to meet the Web Content Accessibility Guidelines (WCAG 2.0). Currently, these websites must meet the criteria for Level A, which includes the rule that colour should never be the only means of conveying information. For example, if a user is entering information into a web form, incorrectly entered information (e.g., a letter typed into the box for a phone number) should not only be indicated by colour (e.g., the box turning red), but by some other means as well (e.g., an icon). As of January 2021, Ontario websites will need to conform to Level AA, which goes beyond colour to include rules for specific contrast ratios. The strong contrast between black letters and a white page makes text easy to read for most people. Coloured text against a coloured background might add interest, but it runs the risk of being harder to read, especially when reds and greens are used. According to Level AA standards, the visual presentation of text and images of text has to have a contrast ratio of at least 4.5:1 (with certain exceptions). As shown in Figure 5.14, online resources simulate how a web page looks to a person with colour deficiency, which helps designers maximize accessibility. Figure 5.14 Making Websites Accessible. Web designers have found colours that work for people with typical colour vision and people who have colour deficiency. This set of colours shows how different shades would be seen by people with typical vision and by people with three of the most common forms of colour deficiency. Even though these colours are seen differently by the three groups, nobody mistakes one shade for another.
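The Level AA contrast rule can be checked programmatically. WCAG 2.0 defines a relative luminance for each colour and takes the ratio of the lighter to the darker value, with a small constant added to each; black text on a white page works out to the maximum ratio of 21:1. A sketch of that calculation:

```python
def relative_luminance(rgb):
    """WCAG 2.0 relative luminance of an (R, G, B) colour, 0-255 per channel."""
    def linearize(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(colour1, colour2):
    """WCAG contrast ratio between two colours; ranges from 1:1 to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(colour1), relative_luminance(colour2)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

print(contrast_ratio((0, 0, 0), (255, 255, 255)))        # 21.0, passes AA easily
print(contrast_ratio((119, 119, 119), (255, 255, 255)))  # ~4.5, at the AA limit
```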


Paul Dronsfield/Alamy Stock Photo; © Cengage Learning Returning to our green, yellow, and black flag, how can we use opponent process theory to explain our experience of the red, white, and blue afterimage? By staring at the flag, you fatigue some of your visual neurons. Because the colour channels compete, reducing activity in one colour group in a channel, such as green, increases activity in the other group, which is red. Fatiguing green, black, and yellow causes a rebound effect in each colour channel, and your afterimage looks red, white, and blue. (Black and white also share a channel.) If you stare at an image of a real red, white, and blue flag and then look at a white piece of paper, your afterimage looks like our green, black, and yellow illustration. Which of these two theories of colour vision, trichromatic theory or opponent process theory, is correct? The trichromatic theory provides a helpful framework for the functioning of the three types of cones in the retina. However, as we move from the retina to higher levels of visual analysis, the opponent process theory seems to fit observed phenomena neatly. Both theories help us understand colour vision but at different levels of the visual system. Recognizing Objects We asked earlier how your brain uses incoming visual signals to recognize your grandmother. A bottom-up approach assumes that as information moves from the retina to higher levels of visual processing, more complicated responses are built from simpler input. In this hierarchical model, the result would be a hypothetical “grandmother cell,” or a single cell that could combine all previous input and processing to tell you that your grandmother is at the door. Although the hierarchical model is attractive in many ways, it does not fit perfectly with what we know about the visual system. First, we would need a large number of single neurons to respond to all the objects and the events that we can recognize visually. In addition, the hierarchical model is unable to account for top-down processing. Figure 5.15a may appear to be a random pattern of black dots on a white background. Figure 5.15b may not appear to be a recognizable object at all. The sensations produced by these stimuli lead to no meaningful perceptions. However, once we tell you that the first image is a Dalmatian dog and the second image is a cow, you can instantly pick out their shapes. Now that you know what the images are, you will probably never see them again the way you did initially. Recognizing these objects requires knowledge and memory of what Dalmatians and cows look like. It is unlikely that a single cortical cell acting as a Dalmatian or cow detector could incorporate such complex inputs from memory. Figure 5.15 Can You Figure Out What These Images Are? (a) This might look like a splattering of black dots on a white page until you learn that it represents a Dalmatian dog. (b) Top-down processing ensures that once you know this is a photo of a cow, you can pick out its features easily.

© Cengage Learning Sources: (a) From Richard L. Gregory, “The Medawar Lecture 2001 Knowledge for vision: Vision for knowledge,” Phil. Trans. R. Soc. B 2005, 360, 1231–1251, by permission of the Royal Society. (b) From American Journal of Psychology. Copyright © 1951 by the Board of Trustees of the University of Illinois. Used with permission of the University of Illinois Press. K. M. Dallenbach, “A puzzle-picture with a new principle of concealment,” 64:3 (July 1951): pp. 431–433. If we don’t use single cells to recognize objects, how can we accomplish this task? The visual system might perform a mathematical analysis of the visual field (De Valois & De Valois, 1980). While the hierarchical model implies a reality built out of individual bars and edges, the mathematical approach suggests that we analyze patterns of lines. The simplest patterns of lines are gratings, as shown in Figure 5.16. Gratings can vary along two dimensions: frequency and contrast. High-frequency gratings have many bars in a given distance and provide fine detail, while low-frequency gratings have relatively few bars. High-contrast gratings have large differences in intensity between adjacent bars, like black next to white. The print you are reading is an example of high contrast because the black letters are quite different from the white background. Low-contrast gratings have subtler differences in intensity between bars, such as dark grey next to black. Figure 5.16 Features of Gratings. An alternative to the hierarchical model suggests that the visual system analyzes the visual environment as a collection of patterns, like these gratings. Gratings vary in frequency (number of bars in a given distance) and contrast (the difference in light intensity from one bar to the next).
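A grating like those in Figure 5.16 can be generated from exactly the two parameters the text describes. The sketch below builds a one-dimensional sinusoidal luminance profile; contrast is expressed as Michelson contrast, and the mean luminance is an arbitrary choice:

```python
import numpy as np

def grating(frequency, contrast, width=256, mean_luminance=0.5):
    """One-dimensional sinusoidal luminance profile.

    frequency: number of light/dark cycles across the image
    contrast:  Michelson contrast, (Lmax - Lmin) / (Lmax + Lmin), 0 to 1
    """
    x = np.linspace(0.0, 1.0, width)
    return mean_luminance * (1 + contrast * np.sin(2 * np.pi * frequency * x))

fine_detail = grating(frequency=32, contrast=1.0)  # high frequency, high contrast
faint_blur = grating(frequency=2, contrast=0.05)   # low frequency, low contrast
print(fine_detail.min(), fine_detail.max())  # luminance swings from 0 to 1
print(faint_blur.min(), faint_blur.max())    # barely departs from the mean
```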

© Cengage Learning Observing responses to gratings gives us a window into the visual capacities of other species. At a certain point of contrast and frequency, gratings look plain grey. Animals can be trained to make a distinction between gratings and grey circles. For example, if a bird is rewarded with food for pecking at a disk with a grating but not for pecking a uniform grey disk, any performance that is better than 50-50, or chance, indicates that the bird can see the difference between the grating and the grey. We can graph the range of gratings that are visible to the observer as a function of their contrast and frequency. Figure 5.17 illustrates the visible ranges for human adults and cats. Compared to human adults, cats see less detail. However, cats see large (low-frequency), low-contrast objects better than humans do. Large, low-contrast shadows on the wall may get kitty’s attention but not yours. You will think kitty is chasing ghosts again. Figure 5.17 What Do Cats See? Using gratings, we get a window into the visual world of the cat. By comparing gratings to a uniform grey disk, we can learn when a grating with a certain contrast and frequency simply looks grey to humans or cats. We can see better detail than kitty, but they see large shadows that we don’t even notice.

Julie Src/ Shutterstock.com; © Cengage Learning Recognizing Faces Of all the objects we observe, recognize, and distinguish among, faces may be the most important. Infants as young as 2 days old appear capable of recognizing their mother’s face (Bushnell, 2001), and by the time they are 6 months old, research indicates that infants’ brains respond to faces in a way that is distinct from other objects (de Heering & Rossion, 2015; Farzin, Hou, & Norcia, 2012). There is even evidence that fetuses in the third trimester prefer looking at dots configured to resemble a face (two dots on top, one dot below) versus an inverted face (one dot on top, two dots below; Reid, Dunn, Young, Amu, Donovan, & Reissland, 2017). Research suggests that human adults are capable of recognizing an average of 5000 faces (with a lot of individual variability; Jenkins, Dowsett, & Burton, 2018). We also know that by adolescence, the visual processing of faces activates a particular area of the brain known as the fusiform face area (FFA), part of the ventral pathway, located in the inferior temporal cortex (Kanwisher, McDermott, & Chun, 1997). While it is well established that the FFA becomes especially active during the processing of faces as opposed to other objects, there is debate over whether this is due to evolutionary (faces are special) or expertise (faces are familiar) reasons. According to the evolutionary argument, face perception served such an important role in our evolutionary history that part of our brain became dedicated to processing faces (Kanwisher et al., 1997). According to the expertise argument, the FFA is more of a generalized expertise area, and humans just happen to be experts at face perception. As support for this, researchers have shown that the FFA also becomes activated when people who are experts in a nonface domain (e.g., bird watchers) categorize stimuli from that domain (e.g., Gauthier, Skudlarski, Gore, & Anderson, 2000). However, other research indicates the neural mechanisms that underlie face processing are distinct from those underlying the processing of similarly complex and identifiable stimuli (e.g., Chinese characters for native Chinese speakers; Fu, Feng, Guo, Luo, & Parasuraman, 2012). While the debate regarding the specific role of the FFA continues, it is clear that faces (and facelike configurations) carry special significance in the typically developing human visual system (e.g., Lochy, Zimmermann, Laguesse, Willenbockel, Rossion, & Vuong, 2018). Although it is generally believed that face processing is a basic, universal visual function, cross-cultural research indicates that East Asians and Western Caucasians use different eye movement strategies while scanning faces (Kelly, Miellet, & Caldara, 2010). Eye-tracking shows that Western Caucasians focus on the eye and mouth regions of a face (reflective of a more locally focused, feature-based recognition strategy), while East Asians focus on the nose area (reflective of a more globally focused, face-centred recognition strategy; Figure 5.18). These differences in facial processing reflect broader cultural differences in cognitive style, which are discussed further in Chapter 10. Figure 5.18 Face Processing Across Cultures. Eye-tracking shows that Western participants focus on the eyes and the mouth of a face, whereas East Asian participants focus on the nose.
To control for possible social norm influences (eye contact is considered rude in some East Asian cultures), the researchers investigated scanning approaches using sheep and make-believe stimuli known as Greebles. The same principles held for these alternate stimuli, possibly reflecting cultural differences in the emphasis on objects and context.

© Cengage Learning Gestalt Psychology As we observed in Chapter 1, a group of German researchers known as the Gestalt psychologists tackled visual perception with a number of ingenious observations. The word Gestalt is derived from the German word for “shape.” These psychologists objected to efforts by Wilhelm Wundt and the structuralists to reduce human experience to its building blocks, or elements. Instead, the Gestalt psychologists argued that some experiences lose information and value when divided into parts. The main thesis of the Gestalt psychologists was stated by Kurt Koffka: “It is more correct to say that the whole is something else than the sum of its parts” (Koffka, 1935, p. 176). Psychology as a Hub Science Face Recognition: Humans and Machines Working Together Although we have learned that humans are quite good at recognizing faces, we also know that people are prone to errors. A security guard scanning a sea of faces at a Beyoncé concert for a suspect is likely to get tired, bored, or break into dance. Computer vision is an interdisciplinary field that involves computers extracting information and making decisions based on information from digital images or videos. For example, a facial recognition algorithm can scan images of faces and identify how confident it is that a face is a match for a target person. While a machine won’t make errors caused by boredom or distraction, computer vision systems are also subject to misses and false alarms. Researchers at Harvard University have demonstrated that for visual search tasks such as recognizing a face in a crowd, a combined human and machine effort may be particularly useful (Valeriani & Poli, 2019). These researchers developed two artificial intelligence (AI) systems that are used in combination with human participants. The first is a facial recognition algorithm as described above, one that identifies matching faces and reports how confident it is in the match. On average, the algorithm has an 84 percent accuracy rate. The second AI system is a brain-computer interface (BCI) that analyzes EEG signals from human participants who are completing the same facial recognition task. The BCI is able to predict confidence levels of the human decision makers at the moment they make their decision about whether the target is present or absent. Compared to the facial recognition algorithm, the human participants perform somewhat worse, with an average accuracy rate of 72 percent. So if the choice is solely between a machine and a human, your best bet would be to allow a machine to perform this task. But why choose? The researchers found that they could get even more accurate results (85 percent) than the facial recognition algorithm alone when they created a new algorithm that was able to combine the decisions and confidence levels of multiple human participants, along with the decision and confidence levels of the facial recognition algorithm. In short, the more judges, the more accurate the judgment, particularly if one of those judges is a machine!
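The combination idea can be sketched as a confidence-weighted vote. To be clear, this is only a schematic of the general approach; Valeriani and Poli’s actual algorithm is more sophisticated, and the numbers here are invented:

```python
def combined_decision(judgements):
    """Confidence-weighted vote over (decision, confidence) pairs, where
    decision is +1 ("target present") or -1 ("target absent") and
    confidence is between 0 and 1."""
    score = sum(decision * confidence for decision, confidence in judgements)
    return "present" if score > 0 else "absent"

# Two hesitant humans (confidence estimated from EEG via the BCI)
# disagree with one fairly confident algorithm.
judgements = [(+1, 0.55), (+1, 0.60), (-1, 0.90)]
print(combined_decision(judgements))  # "present": the humans outvote the machine
```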

Adapted from “Mapping the Backbone of Science,” by K. W. Boyack et al., 2005, Scientometrics, 64(3), 351–374. With kind permission from Springer Science+Business Media.

DAVID MCNEW/AFP/Getty Images According to the Gestalt psychologists, we are born with built-in tendencies to organize incoming sensory information in certain ways. This natural ability to organize simplifies the problem of recognizing objects (Biederman, 1987). One organizing principle suggests that we spontaneously divide a scene into a main figure and ground. We frequently assume that the figure stands in front of most of the ground, and it seems to have more substance and shape. It is possible to construct ambiguous images, like the vase on this page, in which the parts of the image seem to switch roles as figure or ground. A second Gestalt principle is proximity (see Figure 5.19). Objects that are close together tend to be grouped together. The dots that make up our Dalmatian in Figure 5.15a are close together, suggesting they belong to the same object. The principle of similarity states that similar stimuli are grouped together. On close examination of the dog image, the dots that make up the dog are similar to one another and slightly different (more rounded, perhaps) from the dots making up the remainder of the image. Figure 5.19 The Gestalt Principles of Proximity and Similarity. The dots in (a) do not appear to have any particular relationship with one another, but when we colour rows in (b), we suddenly see the dots as rows or columns. Moving two columns or rows slightly closer to each other in (c) makes us see the array differently too.

© Cengage Learning The principle of continuity suggests that we assume that points that form smooth lines when connected probably belong together (see Figure 5.20). In our dog picture, continuity helps us see the border of the curb or sidewalk and the ring of shadow around the base of the tree. Continuity is perhaps a little less useful in identifying the dog, although we can pick out the lines forming the legs. Figure 5.20 The Gestalt Principle of Continuity. The Gestalt principle of continuity says that we perceive points forming a smooth line as belonging to the same object. If you follow this knot, you can see that it is formed by two objects, but our initial perception is of a single form.

© Cengage Learning Closure occurs when people see a complete, unbroken image even when there are gaps in the lines forming the image (see Figure 5.21). We use this approach in viewing the dog in Figure 5.15a when we “fill in the blanks” formed by the white parts of its body. Figure 5.21 The Gestalt Principle of Closure. Because of the principle of closure, we “fill in the blanks” to see a single object, the World Wildlife Fund logo, although it is made up of several objects.

© Cengage Learning Finally, the Gestalt psychologists believed in the principle of simplicity, which suggests that we will use the simplest solution to a perceptual problem. This principle may help explain the fun in pictures like that of our Dalmatian dog. It is simpler to assume that this is a random splash of black dots on a white background. Finding a hidden picture within the dots is not the simplest solution, which may account for our surprise. Recognizing Depth An image projected onto the retina is two-dimensional, as flat as the sheet of paper or screen on which these words appear. Somehow, the brain manages to construct a three-dimensional (3D) image from these data. Adelbert Ames Jr. constructed a room that was named in his honour, the Ames Room, which illustrated vulnerabilities in our depth perception (Ittelson, 1952). When viewed directly from the front, the room appears to be a rectangle. People within the room, shown in Figure 5.22, seem to be larger or smaller than normal. This distortion of perceived size results from the room's ability to confuse our judgment of distance. Figure 5.22 The Ames Room Tricks Our Depth Perception. Many distance cues, such as the apparently rectangular windows, conspire to make these two people look different. The person on the right is much closer to us than the person on the left. The diagram shows the actual layout of the Ames Room.

Field Museum Library/Contributor/Archive Photos/Getty Images; © Cengage Learning To construct a 3D image, we use both monocular cues (one eye) and binocular cues (two eyes). Many monocular cues are found in paintings because the artists attempt to provide an illusion of depth in their two-dimensional pieces. The use of linear perspective, or the apparent convergence of parallel lines at the horizon, by Italian artists during the 15th century provided a realism unknown in earlier works. Linear perspective revolutionized the video game and movie industries, beginning humbly with Sega's Zaxxon in 1982 and advancing to the ever more realistic environments of Halo, Pixar's animated films, and the 2009 film Avatar. Other monocular cues include texture gradients and shading. We can see more texture in objects that are close to us, while the texture of distant objects is relatively blurry. Shading and the use of highlights can be used to suggest curved surfaces. The Gestalt psychologists believed we naturally see the difference between objects and their background, but this figure is designed to make us switch back and forth between the vase and the background faces. This vase was designed to commemorate an anniversary of Queen Elizabeth II of England (face on the right) and her husband, Prince Philip (face on the left).

SSPL/Science Museum / Art Resource, NY Among the most powerful monocular depth cues is occlusion, or the blocking of images of distant objects by closer objects. We also use relative size to judge the distance of objects, although this method requires you to be familiar with the real size of an object. We know how big people are. When the retinal image of a person is small, we infer that the person is farther from us than when the retinal image of a person is larger. Several illusions result from our use of monocular cues to judge depth. Relative size helps to explain the moon illusion. You may have noticed that the moon appears to be larger when it is just above the hills on the horizon than when it is straight overhead. The moon maintains a steady orbit 385 000 kilometres (239 000 miles) above the Earth. How can we account for the discrepancy in its apparent size? When viewed overhead, the moon is seen without intervening objects, such as trees and hills, that might provide cues about its size and distance. However, when viewed near the horizon, we see the moon against a backdrop of familiar objects whose sizes we know well. We expect trees and hills to be smaller at the horizon than when they are close to us, and if we group the moon with those objects, we adjust its apparent size as well. The next time you are viewing the full moon as it rises over the hills, form a peephole with your hand, and you will see the moon in its normal small size. Although some researchers argue that atmospheric differences between the two viewpoints may contribute to the illusion, viewing the moon through your hand should demonstrate that most of the effect arises from your use of other objects to judge distance. In the Müller–Lyer illusion, shown in Figure 5.23, we see the line with outward-pointing arrowheads as being farther from our position, even though the main lines project images of equal length on the retina. The Ponzo illusion, shown in Figure 5.24, confounds size and distance judgments in a similar fashion. The parallel lines signal depth, leading us to believe that the upper horizontal line is farther away than the lower line. If both lines project the same image on the retina, the more distant line must be longer. Figure 5.23 The Müller–Lyer Illusion. You might find it hard to believe that the two red vertical lines are actually the same length.

© Cengage Learning Figure 5.24 The Ponzo Illusion. We perceive depth because of linear perspective, which in turn makes us see the upper horizontal bar as more distant than the lower bar. Even though they are the same length, the bar perceived as more distant looks longer.

© Cengage Learning So far, we have discussed monocular cues that involve a person and a scene that is not moving. The introduction of motion can heighten the impression of depth. As you ride in a car, focus your gaze at a distant point. The objects you pass will appear to be moving in the opposite direction of your car, with closer objects appearing to move faster than distant objects. Next, focus on a point about midway between you and the horizon. Now, the closer objects will continue to move in the opposite direction, but more distant objects appear to be travelling with you. This motion parallax has been used to enhance the 3D feeling in video games. One of our most effective depth cues is retinal disparity. Because this cue requires the use of both eyes, we refer to retinal disparity as a binocular cue. Predator species, including ourselves, have eyes placed in the front of the head facing forward. As a result of this configuration, the visual scenes observed by the two eyes are different and overlapping, as shown in Figure 5.25. The differences between the images projected onto each eye are called disparities. These disparities do not tell us how far away an object is. Instead, they provide information about the relative distance between two objects in the visual field. As the distance between the objects increases, disparity increases. To illustrate the sensitivity of this system, you can identify an object as being 1 millimetre (about 0.04 inch) closer than another at a distance of 1 metre (about 3.3 feet) from your body, or a difference of 0.1 percent (Blake & Sekuler, 2006). Figure 5.25 Retinal Disparity. The right and left eyes see slightly overlapping versions of the visual scene in front of us. We can use the retinal disparity, or discrepancy between the locations of two objects on the two retinas, as a sensitive depth cue.

© Cengage Learning Why would this binocular depth system be an advantage to predators? Most prey species do an excellent job of hiding, often aided by an appearance that blends into the nearby environment. However, retinal disparity allows us to spot tiny variations in the depths of objects in the visual field. This might make an animal stand out against its background, even though it is well camouflaged. Retinal disparity has been used to identify camouflaged military equipment and counterfeit currency. Retinal disparity is imitated by cameras used to film 3D movies, which have two lenses separated by about the same distance as our two eyes. Video games, such as Minecraft, incorporate many monocular cues to provide an experience of depth. The standard-sized blocks used to build structures in the game are separated by lines, which when placed in a row provide linear perspective. Texture gradients, shading, and relative size (players understand the size of the blocks well) also contribute to perceived depth.

veryan dale/Alamy Stock Photo

5-2d Developmental and Individual Differences in Vision Although human infants can't report what they see, we can take advantage of the fact that they gaze longer at patterns than at uniform stimuli, like a patch of a single colour. This allows us to construct graphs of the contrasts and the frequencies to which children respond, similar to those we saw previously for cats. Based on these analyses, we know that human infants see everything human adults see but with less detail. To see well, the infant also needs more contrast than the adult. These findings help explain children's preferences for large, high-contrast objects. The photographs shown below provide insight into the visual world of the infant. Frequencies that cannot be seen by the infant have been removed from each photograph. Other research shows that infants as young as 4 months not only respond to binocular disparity, but also show normal adult responses to colour (Bornstein, Kessen, & Weiskopf, 1976). Other depth cues discussed previously develop early too. Infants as young as 2 months understand occlusion (Johnson & Aslin, 1995), and the use of the relative size of objects to judge depth appears between the ages of 5 and 7 months (Granrud, Haake, & Yonas, 1985). As previously mentioned, infants' abilities to perceive faces also develop quite rapidly, as 2-day-old newborns spend more time gazing at their mothers' faces than at a stranger's face (Bushnell, 2001; also see Chapter 11). The two lenses of 3D cameras are separated by about 5 centimetres (2 inches), mimicking the distance between two human eyes that makes retinal disparity possible.

YOSHIKAZU TSUNO/AFP/Getty Images/Newscom Predictable changes occur in other aspects of human vision as we grow older. Accommodation of the lens, which allows us to change focus from near to far objects, becomes slower beginning in middle adulthood (keep this in mind the next time you try to show your dad something on your phone while he's watching TV; it might take his eyes a minute or two to focus on the close-up image!). Older adults respond more slowly to changes in brightness, such as when leaving a dark theatre and stepping into sunlight. The muscles of the iris lose their elasticity, so pupils remain smaller, further reducing vision by limiting the amount of light that enters the eye. The lens of the eye begins to yellow, which protects the eye from ultraviolet radiation but affects the perception of colour. We can filter out the frequencies and contrast that a baby cannot see to simulate what the world looks like to an infant.


Felix Mizioznikov/ Shutterstock.com At any age, individual differences shape what people see. In addition to the colour deficiencies discussed previously, people differ in their abilities to see near and far objects. Those who deviate from the average often wear corrective lenses or undergo laser surgery to reshape the cornea. The most common visual problems result from eyeball length, with elongated eyeballs interfering with a person’s vision for distant objects (nearsightedness) and shortened eyeballs interfering with vision for close-up objects (farsightedness), as in reading. Vision is also affected by astigmatism, which means that the surface of the cornea is uneven. You can test yourself for astigmatism by looking at Figure 5.26. Children with strabismus, a condition where the eyes do not align properly, may fail to develop binocular depth perception if the condition is left uncorrected. Adults who develop strabismus are likely to experience double-vision, as the brain perceives two non-corresponding images of the same object. Children with strabismus do not typically experience double-vision because their brain instead chooses to focus on only one incoming image and suppresses the other. However, this suppression can lead to other complications such as amblyopia (commonly known as “lazy eye”). Canadian actor Ryan Gosling is one of a number of celebrities who appear to have a form of strabismus.

DFree/ Shutterstock.com Figure 5.26 Astigmatism. If you have astigmatism, which results from an uneven surface of your corneas, some spokes of this figure will appear darker than others.

© Cengage Learning Summary 5.2 Important Features of the Visual System

Argosy Publishing, Inc.

Feature: Significance
Cornea: Bends light toward the retina.
Pupil: Forms a

5-3 How Do We Hear? We have spent a considerable amount of time on the sense of vision, which might be considered a dominant source of information for humans. However, when Helen Keller, who was both blind and deaf, was asked which disability affected her the most, she replied that blindness separated her from things, while deafness separated her from people. Audition, our sense of hearing, not only allows us to identify objects in the distance but also plays an especially important role in our ability to communicate with others through language.

5-3a The Auditory Stimulus Sound begins with the movement of an object, setting off waves of vibration in the form of miniature collisions between adjacent molecules in air, liquid, or solids. Because sound waves require this jostling of molecules, sound cannot occur in the vacuum of space, which contains no matter. Those explosions we enjoy in Star Wars films are great entertainment but not good science. Earlier in this chapter, we described light energy as waves with different amplitudes and frequencies. Sound waves possess the same dimensions. However, in the case of sound, the height or amplitude of the wave is encoded as loudness or intensity and the frequency of the wave is encoded as pitch. High-amplitude waves are perceived as loud, and low-amplitude waves are perceived as soft. High-frequency waves (many cycles per unit of time) are perceived as high pitched, whereas low-frequency sounds are low pitched. In sound, amplitude is measured in units called decibels (dB), and frequency is measured in cycles per second, or hertz (Hz; see Figure 5.27). Figure 5.27 Features of Sound. Like the light energy we see, sound waves are characterized by frequency and amplitude. We perceive frequency as the pitch of the sound (high or low), measured in hertz (Hz), and we perceive amplitude as the loudness of the sound, measured in decibels (dB).
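For reference, the decibel scale mentioned above is logarithmic in sound intensity; the formula below is the standard definition rather than one spelled out in this passage:

L_{dB} = 10 \log_{10}(I / I_0), \quad I_0 = 10^{-12}\ \mathrm{W/m^2}

where I_0 is approximately the intensity at the threshold of human hearing. A sound 100 times more intense than threshold therefore measures 10 × log10(100) = 20 dB, and every 10 dB step corresponds to a tenfold increase in intensity.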

© Cengage Learning In addition to dizziness and nausea, exposure to infrasound makes people report feelings of chills down the spine, fear, and revulsion, even though they cannot consciously detect the sound. Some scientists believe that infrasound produced in certain places leads people to conclude the places are haunted. As we observed in the case of the light spectrum, parts of the auditory spectrum are outside the range of human hearing. Ultrasound stimuli occur at frequencies above the range of human hearing, beginning around 20 000 Hz (see Figure 5.28). Ultrasound can be used to clean jewellery or your teeth or to produce noninvasive medical images. Infrasound refers to frequencies below the range of human hearing, or less than 20 Hz. Many animals, including elephants and marine mammals, use infrasound for communication. Infrasound is particularly effective in water because low-frequency sound travels long distances there. Figure 5.28 Range of Hearing. Ultrasounds are above the range of human hearing, and infrasounds are below the range of human hearing.


Photos, left to right: pathdoc/ Shutterstock.com; Mike Hill/Alamy Stock Photo; Eric Carr/Alamy Stock Photo; Erik Lam/ Shutterstock.com; Keith J. Smith/Alamy Stock Photo; Matthias Clamer/Getty Images

5-3b The Biology of Audition The fetus has no bubble of air in the middle ear, having never been exposed to air. Because fluids do a better job than air of transmitting sound waves, there is good evidence that the fetus can hear outside sounds, such as mother’s voice, quite well during the final trimester of pregnancy. Human audition begins with an ear located on either side of the head. The components that make up the ear are divided into three parts: the outer ear, the middle ear, and the inner ear (see Figure 5.29). Figure 5.29 Parts of the Ear. The human ear is divided into the outer, middle, and inner ear.

© Cengage Learning The outer ear consists of the structures that are visible outside the body. The pinna, the outer visible structure of the ear, collects and focuses sounds, like a funnel. In addition, the pinna helps us localize sounds as being above or below the head. Sounds collected by the pinna are channelled through the auditory canal, which ends at the tympanic membrane, or eardrum, at the boundary between the outer and the middle ear. The boundary between the middle and the inner ear is formed by another membrane, the oval window. The gap between these two membranes is bridged by a series of tiny bones. The purpose of these bones is to transfer sound energy from the air of the outer and middle ear to the fluid found in the inner ear. Sound waves are weakened as they move from air to water. When you try to talk to friends underwater, the result is rather garbled. Without the adjustments provided by these small bones, we would lose a large amount of sound energy as the sound waves moved from air to liquid. The inner ear contains two sets of fluid-filled cavities embedded in the bone of the skull. One set is part of the vestibular system, which we will discuss later in this chapter. The other set is the cochlea, from the Greek word for “snail.” When rolled up like a snail shell, the human cochlea is about the size of a pea. It contains specialized receptor cells that respond to vibrations transmitted to the inner ear. The movement of tiny hair cells in the inner ear produces neural signals that travel to the brain.

Prof. P.M. Motta/Univ. “La Sapienza”, Rome/Science Source The cochlea is a complex structure, which is better understood if we pretend to unroll it (see Figure 5.30). The cochlea may be divided into three parallel chambers divided from one another by membranes. Two of these chambers, the vestibular canal and the tympanic canal, are connected at the apex of the cochlea, or the point farthest from the oval window. Vibrations transmitted by the bones of the middle ear to the oval window produce waves in the fluid of the vestibular canal that travel around the apex and back through the tympanic canal. Lying between the vestibular and the tympanic canals is the cochlear duct. The cochlear duct is separated from the tympanic canal by the basilar membrane. Resting on top of the basilar membrane is the organ of Corti, which contains many rows of hair cells that transduce sound energy into neural signals. Each human ear has about 15 500 of these hair cells. Figure 5.30 Perception of Pitch. Sound waves produce peak responses on the basilar membrane according to their frequencies. Like the strings on a musical instrument, high tones produce the greatest response at the narrow, stiff base of the basilar membrane, while low tones produce the greatest response at the wide, floppy part of the basilar membrane near the apex. Sound waves travel through the cochlea from the oval window, around the apex, and back to the round window. The waves cause movement of tiny hair cells in the cochlear duct, which we perceive as sound.

© Cengage Learning As waves travel through the cochlea, the basilar membrane responds with a wavelike motion, similar to the crack of a whip. The movement of the basilar membrane causes the hair cells of the organ of Corti to move back and forth within the fluid of the cochlear duct. Bending the hair cells stimulates the release of neurotransmitters onto the cells of the auditory nerve. The basilar membrane needs to move very little before the hair cells are stimulated. If the hairlike structures extending from the top of the hair cells were the size of the Eiffel Tower in Paris, the movement required to produce a neural response would be the equivalent of 1 centimetre (Hudspeth, 1983). As we mentioned earlier, hair cells stimulate axons forming the auditory nerve. One branch of each auditory nerve cell makes contact with the hair cells, while the other branch proceeds to the medulla of the brainstem. From the medulla, sound information is sent to the midbrain, which manages reflexive responses to sound, such as turning toward the source of a loud noise. In addition, the midbrain participates in sound localization, or the identification of a source of sound. The midbrain passes information to the thalamus, which in turn sends sound information to the primary auditory cortex, located in the temporal lobe. The primary auditory cortex conducts the first basic analysis of the wavelengths and amplitudes of incoming information (see Figure 5.31). Surrounding the primary auditory cortex are areas of secondary auditory cortex that respond to complex types of stimuli, like clicks, noise, and sounds with particular patterns. Figure 5.31 Auditory Cortex. The auditory cortex is located in the temporal lobe. The primary auditory cortex processes basic features of sound while the surrounding secondary auditory cortex processes more complex sounds, such as clicks and general noise.

© Cengage Learning

5-3c Auditory Perception and Cognition Now that we have an understanding of the structures and the pathways used to process the sensations that lead to the perception of sound, we turn our attention to the brain's interpretation and organization of these sounds in terms of pitch, loudness, and spatial localization. Pitch Perception Perception of pitch begins with the basilar membrane of the cochlea (see Figure 5.30). Place theory suggests that the frequency of a sound is correlated with the part of the basilar membrane showing a peak response. The base of the basilar membrane, closest to the oval window, is narrow and stiff. In contrast, at its farthest point near the apex, the basilar membrane is wide and flexible. If you are familiar with stringed instruments like guitars, you know that high tones are produced by striking the taut, small strings, and low tones are produced by striking the floppy, wide strings. The same principle holds for the basilar membrane. High-frequency tones produce the maximum movement of the basilar membrane near the base, while low-frequency tones produce maximum movement near the apex. The hair cells riding above these areas of peak movement show a maximum response. Place theory works well for sounds above 4000 Hz, which is about the frequency produced by striking the highest key on a piano, C8. Below frequencies of 4000 Hz, the response of the basilar membrane does not allow precise localization. In these cases, we appear to use another principle described as temporal theory, in which the patterns of neural firing match the frequency of a sound. Perceiving Loudness Humans can perceive sounds that vary in intensity by a factor of more than 10 billion, from the softest sound we can detect up to the sound made by a jet engine at takeoff, which causes pain and structural damage to the ear. Table 5.2 identifies the intensity levels of many common stimuli, measured in the logarithmic decibel scale. Our perception of loudness does not change at the same rate as actual intensity. When the intensity of a sound stimulus is 10 times greater than before, we perceive it as being only twice as loud (Stevens, 1960).

Table 5.2 Loudness of Common Sounds

Source of Sound: Intensity (measured in decibels, or dB)
Threshold of hearing: 0
Rustling leaves: 10
Whisper: 20
Normal conversation: 60
Busy street traffic: 70
Vacuum cleaner: 80
Water at the foot of Niagara Falls: 90
iPod with standard earbuds: 100
Front rows of a rock concert: 110
Propeller plane at takeoff: 120
Threshold of pain/machine gun fire: 130
Military jet takeoff: 140
Instant perforation of the eardrum: 160
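The observation that a tenfold increase in intensity sounds only about twice as loud can be checked against Stevens's power law; the arithmetic below is ours, assuming the commonly cited exponent of roughly 0.3 for loudness as a function of sound intensity:

\psi(I) = k I^{0.3} \quad \Rightarrow \quad \frac{\psi(10I)}{\psi(I)} = \frac{k(10I)^{0.3}}{kI^{0.3}} = 10^{0.3} \approx 2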

The frequency of a sound interacts with our perception of its loudness. Humans are maximally sensitive to sounds that normally fall within the range of speech, or between 80 and 10 000 Hz (see Figure 5.32). Sounds falling outside the range of speech must have higher intensity before we hear them as well. One feature that distinguishes an expensive sound system from a cheaper model is its ability to boost frequencies that fall outside our most sensitive range. Figure 5.32 Human Sensitivity to Sound. These functions plot the results of allowing participants to adjust the intensity of different tones until they sound equally loud. Each curve represents the intensity (dB) at which tones of each frequency match the perceived loudness of a model 1000 Hz tone. The stars indicate that a 100 Hz tone at 60 dB sounds about as loud as a 1000 Hz tone at 40 dB because they fall on the same line. Low frequencies are usually perceived as quieter than high frequencies at the same level of intensity. We are especially sensitive to frequencies found in speech.

© Pathdoc/ Shutterstock.com; © Cengage Learning Localization of Sound The pinna helps us localize sounds in the vertical plane, or in space above or below the head. Our primary method for localizing sound in the horizontal plane (in front, behind, and to the side) is to compare the arrival time of sound at each ear. As illustrated in Figure 5.33, the differences in arrival times are quite small, ranging from 0 milliseconds for sounds that are directly in front of or behind us to 0.6 milliseconds for sounds coming from a source perpendicular to the head on either side. Because arrival times for sounds coming from directly in front of or behind us are identical, it is difficult to distinguish these sources without further information. In addition to arrival times, we judge the differences in intensity of sounds reaching each ear. Because the head blocks some sound waves, a sound "shadow" is cast on the ear farthest from the source of sound. As a result, a weaker signal is received by this ear. Figure 5.33 Where Is That Sound Coming From? We localize sound to the left and right by comparing the differences between the arrival times of the sounds to our two ears.
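The 0.6 millisecond maximum is easy to sanity-check; the back-of-the-envelope calculation below is ours, assuming an extra travel path of roughly 20 centimetres around the head and sound moving at about 343 metres per second in air:

\Delta t \approx \frac{d}{v} = \frac{0.2\ \text{m}}{343\ \text{m/s}} \approx 0.0006\ \text{s} = 0.6\ \text{ms}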

© Cengage Learning Just as our visual systems can be fooled by certain types of input, our ability to localize sounds is influenced by interactions between vision and audition. Even before the invention of surround sound, which provides many effective sound localization cues, moviegoers perceived sound as originating from the actors’ lips, even though the speakers producing the sound are located above and to the sides of the screen. Our willingness to believe that the sound is coming from the actors’ lips probably results from our everyday experiences of watching people speak. The McGurk effect is an auditory illusion that occurs when we combine vision and hearing. In this demonstration, hearing “ba-ba” at the same time you see a person’s lips making “ga-ga” results in your perceiving “da-da.”

© Cengage Learning Auditory Groupings In our previous discussion of visual perception, we reviewed the grouping principles developed by Gestalt psychologists. Similar types of groupings occur in audition. Sounds from one location are grouped together because we assume they have the same source, whereas sounds identified as coming from different locations are assumed to have different sources. Sounds that start and stop at the same time are perceived as having the same source, while sounds with different starting and stopping times usually arise from separate sources, such as two voices in a conversation. Grouping plays an especially significant role in the perception of music and speech. In these cases, we see evidence of top-down processing as well, because our expectations for the next note or word influence our perceptions (Pearce, Ruiz, Kapasi, Wiggins, & Bhattacharya, 2010). Similarities between the processing of music and language have led researchers to argue for more music instruction in school to assist children with language learning (Strait, Kraus, Parbery-Clark, & Ashley, 2010).

5-3d Developmental and Individual Differences in Audition Hearing begins before birth and develops rapidly in human infants. Newborns as young as 2 days show evidence of recognizing their mother’s voice (DeCasper & Fifer, 1980) and respond preferentially to their native language (Moon, Cooper, & Fifer, 1993). Infants younger than 3 months show strong startle reactions to noise. By the age of 6 months, infants turn their heads in the direction of a loud or interesting sound. It is likely that their thresholds for sounds are nearly at adult levels by this age (Olsho, Koch, Halpin, & Carter, 1987). By the age of 1 year, children should reliably turn around when their name is called. An important developmental change in audition is age-related hearing loss. Hearing loss occurs first at higher frequencies. After the age of 30, most people cannot hear sounds above 15 000 Hz. After the age of 50, most people cannot hear above 12 000 Hz, and people older than 70 years have difficulty with sounds above 6000 Hz. Because speech normally ranges up to 8000 to 10 000 Hz, older adults might begin to have difficulty understanding the speech of others. Among individual differences in hearing is having perfect pitch, which means that you can name a musical tone that you hear. The brains of individuals with perfect pitch are structurally different from those of people who do not have this ability. Areas of the left hemisphere are larger in musicians with perfect pitch (Schlaug, Jancke, Huang, & Steinmetz, 1995). At the same time, extensive early musical training can shape the structure of the brain (Schlaug et al., 2009).

5-3e Sociocultural Influences on Auditory Perception Human culture and social life often provide a framework for the interpretation of stimuli. A dramatic example of this type of influence is our reaction to sine wave speech. To produce this stimulus, scientists artificially alter recordings of speech to resemble regular, repeating sine waves, as shown in Figure 5.34 (Davis, 2007). When people hear these artificial sounds without further instructions, they describe them as tweeting birds or other nonlanguage stimuli. However, if people are told the sounds represent speech, they suddenly "hear" language elements (Remez, Rubin, Pisoni, & Carrell, 1981). Figure 5.34 Expectations Influence the Interpretation of Sine Waves. Sine waves are regular and repetitive waveforms, such as the ones we included earlier to show how the height and frequency of light and sound waves are interpreted by the mind. Researchers can record speech sounds and transform the recordings into artificial sine waves, such as those in this image. If the sounds are played without information about their source, most people interpret the sounds as tweeting birds. However, if people are told that the recordings are language, they report "hearing" language, another example of top-down cognitive influences on perception.

Source: © Remez, R. E. (1998). Sine-wave speech. Scholarpedia, 3: 2394 For native English speakers, the sounds /r/ and /l/ are easy to distinguish from one another, and it may be difficult to imagine how someone might get these sounds confused. But for native Japanese speakers, hearing and producing English /r/ and /l/ can be extremely difficult. Even native speakers of Japanese who are comfortable with conversational English and who have lived in an English-speaking country for an extended period of time have difficulty perceiving the acoustic difference between English /r/ and /l/ (Goto, 1971). Similarly, native English speakers have difficulty perceiving sounds that are not typically differentiated in their own language, such as the different /p/ sounds used in the Hindi words for fruit and moment, written in the Latin alphabet as phal (where the p is pronounced with an aspiration, or puff of air, behind it) and pal (where the p is pronounced with no aspiration), respectively. Sine wave speech shows us how culture in the form of experience with language can shape perception, but in other instances, perception can shape culture. For many people with hearing loss and for their families and friends, being deaf means something other than having a disability. Instead, deafness is viewed as a culture, complete with its own set of attitudes, language, and norms. American Sign Language (ASL) originated in the northeastern United States in the 1800s and is the predominant sign language used in North America (though dialects of it are used around the world). It is also a form of sign language commonly learned by hearing individuals. While it is sometimes believed that ASL is simply a "signed" version of English, this is far from the truth: ASL is its own unique language, with its own grammar, sentence structure, and vocabulary. As a result, ASL is difficult for signing people in Great Britain and Australia to understand (Mindess, 2006). Diverse Voices in Psychology Combining Sight with Sound to Help People Learn Indigenous Languages Proper pronunciation is an important and challenging aspect of learning any new language. In the case of many Indigenous languages, traditional pronunciations have been influenced by the dominant language of the community (e.g., English). As discussed in Chapter 10, many Indigenous languages are in danger of becoming extinct. In areas such as British Columbia, the majority of Indigenous language speakers learn an Indigenous language as a second language, and there is growing concern that traditional Indigenous ways of speaking will not be passed along to future generations (Bird & Kell, 2017). Screenshot from an ultrasound overlay video (this one was developed for people learning Japanese).

Courtesy of Bryan Gick, eNunciate! In order to help English speakers learn the endangered Salish languages of SENĆOŦEN, Halq'emeylem, and Secwepemc, linguists and community members in British Columbia have developed a series of multimedia resources to aid in pronunciation (Bliss, Bird, Cooper, Burton, & Gick, 2018). These resources focus on the pronunciation of sounds that do not exist in the English language and that require particular movements of the tongue that English speakers are not accustomed to making. When listening to someone speak the language, it is of course not possible to actually see the movement their tongue is making in order to produce the sound. One solution is to create videos that include ultrasound depictions of tongue movements, allowing the learner to see (and then mimic) what is going on inside the speaker's mouth. Because ultrasound videos on their own can be difficult for novices to discern (as anyone who has had a baby can attest), the language instruction videos take the ultrasound footage of the tongue and place it on top of normal video of the person speaking (as depicted in the image above). Research examining Cantonese language learners has shown that use of the ultrasound overlay videos improves learners' perception and production of Cantonese compared to audio-only instruction (Bliss, Cheng, Schellenberg, Pai, Lam, & Gick, 2017). In the case of endangered Indigenous languages, creating videos of first language speakers also enables more language learners to actually listen to and learn from a first speaker of the language. Creating a library of videos also helps reduce some of the burden that may otherwise be placed on first speakers of the language as they try to keep the language alive (Bliss et al., 2018).

5-4 How Do We Feel Body Position, Touch, Temperature, and Pain? Somatosensation (soma comes from the Greek word for “body”) provides us with information about the position and movement of our bodies, along with touch, skin temperature, and pain. Although these senses may not seem as glamorous as vision and hearing, we are severely disabled by their loss. You might think it would be a blessing to be born without a sense of pain, but people who have impaired pain reception often die prematurely because of their inability to respond to injury. Although unpleasant, pain tells us to stop and assess our circumstances, which might have promoted the survival of our ancestors.

5-4a Somatosensory Stimuli Unlike the visual and auditory stimuli we have discussed so far in this chapter, somatosensory stimuli arise from within the body or make contact with its surface. As a result, these stimuli provide an organism little time to react. We can deal with a predator seen or heard from a distance using strategies different from those we use for one that is touching us. Nonetheless, the somatosenses provide essential feedback needed for movement, speech, and safety.

5-4b The Biology of the Somatosenses The transition from walking on four legs to walking on two placed selective pressure on the evolution of primate vision and, to some extent, audition. By standing up on two legs, primates distanced themselves from many sources of information, such as smell. If you don’t believe us, try getting down on your hands and knees and smelling your carpet. This transition did not place the same evolutionary pressure on the human somatosenses, which work about the same way in us as they do in other animals. Body Position To begin our exploration of the somatosensory systems, we return to the inner ear. Adjacent to the structures responsible for encoding sound, we find the structures of the vestibular system, which provide us with information about body position and movement. The proximity of these structures to the middle ear, which can become congested because of a head cold, is often responsible for those rather unpleasant feelings of dizziness that accompany an illness. The receptors of the vestibular system provide information about the position of the head relative to the ground, linear acceleration, and rotational movements of the head. We sense linear acceleration when our rate of movement changes, such as when our airplane takes off. Like the cochlea, the vestibular receptors contain sensitive hair cells that are bent back and forth within their surrounding fluid when the head moves. When extensive movement stops suddenly, perhaps at the end of an amusement park ride, these fluids reverse course. You may have the odd sensation that your head is now moving in the opposite direction, even though you are sitting or standing still. The movement of these hair cells results in the production of signals in the auditory nerve, the same nerve that carries information about sound. These axons form connections in the medulla and in the cerebellum. You may recall from Chapter 4 that the cerebellum participates in balance and motor coordination, functions that depend on feedback about movement. In turn, the medulla receives input from the visual system, the cerebellum, and other somatosenses. This arrangement provides an opportunity to coordinate input from the vestibular system with other relevant information. The medulla forms connections directly with the spinal cord, allowing us to adjust our posture to keep our balance. Vestibular information travels from the medulla to the thalamus, the primary somatosensory cortex of the parietal lobe, and then the primary motor cortex in the frontal lobe. This pathway allows vestibular information to guide voluntary movement. The vestibular system helps us maintain a steady view of the world, even when riding the most extreme roller coaster.

Peter Mumford/Alamy Stock Photo In humans particularly, information from the vestibular system is tightly integrated with visual processing. As we move, it is essential that we maintain a stable view of our surroundings. To accomplish this task, rotation of the head results in a reflexive movement of the eyes in the opposite direction. This action should allow you to maintain a steady view of the world, even on the most extreme roller coaster. Touch Touch provides a wealth of information about the objects around us. By simply exploring an object with touch, we can determine features such as size, shape, texture, and consistency. These judgments confirm and expand the information we obtain about objects through visual exploration. Touch is not only a means of exploring the environment. Particularly in humans, touch plays a significant role in social communication. Infants who are touched regularly sleep better, remain more alert while awake, and reach cognitive milestones at earlier ages (Ackerman, 1990). We hug our friends and loved ones to provide comfort, pat others on the back for a job well done, and shake hands to greet a colleague or conclude a deal. The contributions of the sense of touch to human sexuality are obvious. Our sense of touch begins with skin, the largest and heaviest organ in the human body. Embedded within the skin are several types of specialized neurons that produce action potentials whenever they are physically bent or stretched. Different types of receptors respond to certain features of a touch stimulus, such as pressure, vibration, or stretch (see Figure 5.35). In addition to their locations in the skin, receptors are located in blood vessels, joints, and internal organs. Unpleasant sensations from a headache or a too-full stomach or bladder originate from some of these receptors. Some receptor fibres wrap around hair follicles and respond whenever a hair is pulled or bent. Others, as we will see later in this section, participate in our senses of pain and skin temperature. Figure 5.35 Touch Receptors. Different receptors in the skin help us sense pressure, vibration, stretch, or pain.

© Cengage Learning Information about touch travels from the skin to the spinal cord. Once inside the spinal cord, touch pathways proceed to the thalamus, along with input from the cranial nerves originating in the touch receptors in the skin of the face, the mouth, and the tongue. The thalamus transmits touch information to the primary somatosensory cortex, located in the parietal lobe. A map of the body’s representation in the primary somatosensory cortex, or a sensory homunculus (“little man”), is shown in the statue to the right. This odd figure demonstrates how areas of the body are represented based on their sensitivity rather than their size. The sensory homunculus, and much of our knowledge of how the human body is represented in the brain, comes from Wilder Penfield at the Montreal Neurological Institute (who you may recall from Chapter 1) and his studies on patients with epilepsy. Different species show different patterns of cortical organization for touch. Humans need sensitive feedback from the lips and the hands to speak and make skilled hand movements for tool use and other tasks. Rats devote a great deal of cortical real estate to whiskers, whereas lips have a high priority in squirrels and rabbits. The sensory homunculus illustrates the amount of representation each part of the body has in the sensory cortex. The human homunculus emphasizes the hands and face.

The Natural History Museum/The Image Works A notable area that is missing from the homunculus is the brain, which has neither touch receptors nor pain receptors. We can only assume that for much of evolutionary history, intrusion into the brain was likely to be fatal. Consequently, there would be no advantage to "feeling" your brain. Because of the lack of somatosensation in the brain, neurosurgeons can work with an alert patient using local anesthesia for the skull and tissues overlying the brain. The surgery produces no sensations of pressure or pain. You may have also noticed that the homunculus is male; there are no female sex organs represented in the homunculus. Although Penfield included women in his research studies, he did not discuss women or describe any potential sex differences in his reports. Because there are good reasons to believe that male and female somatosensory representations may differ (e.g., because female sex organs are internal to a greater extent than those of males), a number of contemporary researchers have argued for the production of a "hermunculus": a complete map of the female body in the brain, which currently does not exist (Di Noto, Newman, Wal, & Einstein, 2013). Advances in robotics combined with better understanding of how touch is processed in the brain are leading to the development of prosthetics that can feel. Using fMRI, researchers were able to map areas of the sensory cortex that reacted when a participant imagined something touching different parts of the hand. With electrodes implanted in the relevant areas, the participant could then respond accurately to touch applied to the prosthetic hand, even when blindfolded. With this more natural feedback, the prosthetic hand should be able to manage delicate tasks, like picking up an egg.

H.S. Photos/Alamy Stock Photo The representation of touch in the primary sensory cortex is plastic, which means that it changes in response to increases or decreases in input from a body part. Many individuals who lose a body part experience a phenomenon known as phantom limb, a term first used by a Civil War physician to describe his patients’ experience of pain from a missing limb. Phantom sensations can result from the reorganization of the somatosensory cortex following the loss of a body part (Borsook et al., 1998). In one case study, touching different parts of a patient’s face produced “feeling” from the patient’s missing hand (Ramachandran & Rogers-Ramachandran, 2000). When his cheek was touched, he reported feeling his missing thumb, along with the expected cheek, while touching his lip elicited feeling from the missing index finger, along with the normal lip sensations. In an even more bizarre example, a patient was embarrassed to report that he experienced a sensation of orgasm in his missing foot. Increased input also changes the organization of the somatosensory cortex. When monkeys were trained to use specific fingers to discriminate among surface textures to obtain food rewards, the areas of the cortex responding to the trained fingertips expanded (Merzenich & Jenkins, 1993). A similar reorganization occurs when blind individuals learn to read Braille (Pascual-Leone & Torres, 1993) or when people train extensively on stringed musical instruments (Elbert, Pantev, Weinbruch, Rockstroh, & Taub, 1995). Using your thumbs for text messaging will probably result in adaptations in cortical representation not seen in older generations (Wilton, 2002). The representation of body parts in the primary sensory cortex changes in response to the amount of input from a body part. Children who study stringed instruments show more space in the sensory cortex devoted to fingers.

wavebreakmedia/ Shutterstock.com Individuals with autism spectrum disorder (ASD) experience a very different sensory world (see Chapter 14). Many individuals with ASD are oversensitive to touch, leading to rejection of hugs and cuddling. In addition, brain responses to touch of self or others differ between individuals with ASD and healthy controls (Deschrijver, Wiersema, & Brass, 2017). The extent of the differences correlated with the individuals’ reports of sensory and social difficulties. Pain Given the anguish experienced by patients with chronic pain, it is tempting to think that not having a sense of pain would be wonderful. However, as mentioned earlier, we need pain to remind us to stop when we are injured, to assess the situation before proceeding, and to allow the body time to heal. Free nerve endings that respond to pain are triggered by a number of stimuli associated with tissue damage. Some pain receptors respond to mechanical damage, such as that caused by a sharp object, while others respond to temperature or chemicals. Among the chemicals that stimulate pain receptors is capsaicin, an ingredient found in hot peppers (Caterina et al., 1997). Information about pain is carried centrally to the brain by two types of fibres. Fast, myelinated axons are responsible for that sharp “ouch” sensation that often accompanies an injury. Slower, unmyelinated axons are responsible for dull, aching sensations. Ashlyn Blocker was born with a rare condition preventing her from feeling pain. Without complaint, she went several days with a broken ankle after falling off her bicycle.

AP Images/Stephen Morton Pain fibres from the body form synapses with cells in the spinal cord, which in turn sends pain messages to the thalamus. This information takes a relatively direct route, with only one synapse in the spinal cord separating the periphery of the body and the thalamus in the forebrain. This arrangement ensures that pain messages are received by the brain with great speed. From the thalamus, pain information is sent to the anterior cingulate cortex and the insula, which manage the emotional qualities of pain, and to the somatosensory cortex in the parietal lobe, which manages information about the location and intensity of pain (Wiech, 2016). The perception of pain is a complex process. While one might assume that increasing levels of damage (and subsequent sensory activation) would lead to increases in perceived pain, this is not always the case. As any parent knows, a child who falls down and gets hurt while reluctantly walking to school is likely to have a very different perception of that experience than the same child who falls down while running around at a birthday party. In 1965, Canadian psychologist Robert Melzack and British neuroscientist Patrick Wall revolutionized the understanding of pain with their publication of the gate control theory of pain. This theory helps explain the various psychological and physical factors that contribute to pain perception, by proposing that rather than pain signals being automatically sent to the brain, these signals must first pass through neurological gates at the spinal cord. If the gate is open, pain signals travel to the brain and are perceived; if the gate is closed, it’s possible that the pain may not be perceived at all. Factors that may close the gate to pain include psychological factors (e.g., a child too excited by a birthday party to let a skinned knee bother them or a soldier in such a state of arousal that they fail to notice their injuries until the battle is over). Pain messages travelling to the brain may also be modified by competing incoming sensory signals. Many of us spontaneously rub our elbow after bumping it painfully. According to this model, input from touch fibres (reacting to rubbing your elbow) competes with input from pain receptors for activation of cells in the spinal cord (see Figure 5.36). Activation of the touch fibres effectively dilutes the amount of pain information reaching the brain. Figure 5.36 The Gate Control Theory of Pain. According to the gate control theory, incoming pain messages can be influenced by factors such as chronic stress (opening the gate wider and producing a greater sensation of pain) or rubbing an injured body part (closing the gate and reducing the sensation of pain).
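As a toy illustration of the gate metaphor, the Python sketch below treats competing touch input and psychological factors as closing the gate; it is our invention for illustration, not Melzack and Wall's formal model, and the function and numbers are hypothetical:

# Toy model: the spinal "gate" scales the pain signal reaching the brain.
def perceived_pain(pain_input, touch_input, distraction):
    # Gate openness runs from 1.0 (fully open) down to 0.0 (fully closed);
    # competing touch input and psychological factors both close it.
    gate = max(0.0, 1.0 - touch_input - distraction)
    return pain_input * gate

print(perceived_pain(0.8, 0.0, 0.0))  # bumped elbow, nothing else: 0.8
print(perceived_pain(0.8, 0.5, 0.3))  # rubbing it at a fun party: 0.16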


Juriah Mosin/Shutterstock.com The perception of pain is affected by the descending influence of higher brain centres. Many forebrain structures form connections with the periaqueductal grey of the midbrain. As we observed in Chapter 4, this area is rich in receptors for our natural opioids, the endorphins. The periaqueductal grey is a major target for opioid painkillers, such as morphine. Electrical stimulation of the periaqueductal grey produces a significant reduction in the experience of pain. Culture, context, and experience can shape our perception of pain. During a festival dedicated to penance and atonement, Tamil Hindus walked through the streets carrying devices called kavadis that hold hooks that are pierced through the skin. Without this cultural context, it is likely that most people would find this experience excruciatingly painful.

Louise Batalla Duran/Alamy Stock Photo Pain is an actively constructed experience that involves our expectations and past experiences (Wiech, 2016). The power of expectation can be seen in placebo effects, which occur when people experience pain reduction even though they have been exposed to an ineffective substance or treatment, such as a sugar pill instead of an aspirin tablet. Traditionally, scientists thought placebo effects occurred because people's belief that they were being treated for pain initiated a real decrease in pain sensation. However, even when people are told they are receiving a placebo, pain relief can occur as long as they are also told that placebo effects can be powerful (Carvalho et al., 2016).