“Peter Speaks to the People about Christ,” engraved print by Flemish artist Philip Galle (1537–1612), after Maerten van Heemskerck. National Gallery of Art, Washington, D.C.
Where Science Meets Religion by Trent Dee Stephens, PhD, for the Come Follow Me lesson July 3–9: Acts 1–5
In Acts 2:4-12 we read that on the day of Pentecost, the apostles, “…were all filled with the Holy Ghost, and began to speak with other tongues, as the Spirit gave them utterance. And there were dwelling at Jerusalem Jews, devout men, out of every nation under heaven. Now when this was noised abroad, the multitude came together, and were confounded, because that every man heard them speak in his own language. And they were all amazed and marvelled, saying one to another, Behold, are not all these which speak Galilæans? And how hear we every man in our own tongue, wherein we were born? Parthians, and Medes, and Elamites, and the dwellers in Mesopotamia, and in Judæa, and Cappadocia, in Pontus, and Asia, Phrygia, and Pamphylia, in Egypt, and in the parts of Libya about Cyrene, and strangers of Rome, Jews and proselytes, Cretes and Arabians, we do hear them speak in our tongues the wonderful works of God. And they were all amazed, and were in doubt, saying one to another, What meaneth this?”
In a more modern context, perhaps the most famous example of speaking in tongues occurred immediately after Karl G. Maeser’s baptism on 14 October 1855 in the Elbe River. Karl was 27 years old. He recounted the event: “We walked home together, President [Franklin D.] Richards [a member of the Quorum of the Twelve Apostles, who presided over the European Mission] and Elder [William] Budge at the right and the left of me, while the other three men walked some distance behind us, so as to attract no notice. . . . Our conversation was on the subject of the authority of the Priesthood, Elder Budge acting as interpreter. Suddenly I stopped Elder Budge from interpreting President Richards’ remarks, as I understood them, and replied to him in German, when again the interpretation was not needed as President Richards understood me also.” President Richards said of the experience, “Brother Maeser did not know English and I did not know German, but I could speak with him and he with me.” (Richards, A. LeGrand, Moritz Busch’s Die Mormonen and the Conversion of Karl G. Maeser, BYU Studies Quarterly 45 (4): 47-68, 2006; byustudies.byu.edu/article/moritz-buschs-die-mormonen-and-the-conversion-of-karl-g-maeser; retrieved 1 July 2023)
After coming to Utah, Maeser had a huge impact on the education of generations of members of The Church of Jesus Christ of Latter-day Saints. He is considered the founder of Brigham Young Academy (later to become Brigham Young University) and led the Academy for 16 years. As an undergraduate at BYU, I walked past Maeser’s statue in front of the Carl F. Eyring Science Center several times a day.
Where, between the mind of the speaker and the mind of the listener, does “speaking in tongues” occur? When one person speaks and another person listens and understands, there are numerous steps in that process. First, the speaker uses his or her larynx and mouth to produce vibrations in the air, resulting in sound waves unique to the specific language and to the specific person speaking. Those sound waves strike the tympanic membrane (eardrum) of each listener, causing it to vibrate. The handle of a tiny bone, called the malleus (mallet, hammer), is attached to the inner surface of the tympanic membrane, so that when that membrane vibrates, so does the malleus. The head of the malleus is attached by tiny ligaments to a second, smaller bone, called the incus (anvil), which, in turn, is attached to an even smaller bone called the stapes (stirrup). The foot plate of the stapes occupies an oval window in the bony wall between the middle and inner ears, and is held in place by a flexible annular ligament, which vibrates when the stapes vibrates. Because the relatively large tympanic membrane drives the much smaller stapes foot plate through the lever action of these three bones, spanning the air-filled space of the middle ear, the sound pressure is mechanically magnified roughly twenty-fold.
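To put a rough number on that twenty-fold figure, here is a simple back-of-the-envelope calculation. The eardrum and stapes foot-plate areas and the lever ratio below are commonly cited textbook approximations, not measurements from this article, and they vary somewhat from source to source:

```python
# Rough estimate of the middle ear's mechanical pressure gain.
# The values below are common textbook approximations and vary by source.

tympanic_membrane_area_mm2 = 55.0  # effective vibrating area of the eardrum
stapes_footplate_area_mm2 = 3.2    # area of the stapes foot plate in the oval window
ossicular_lever_ratio = 1.3        # mechanical advantage of the malleus-incus lever

# Pressure gain ~ (large collecting area / small delivering area) x lever ratio
area_ratio = tympanic_membrane_area_mm2 / stapes_footplate_area_mm2
pressure_gain = area_ratio * ossicular_lever_ratio

print(f"Area ratio:    {area_ratio:.1f} to 1")
print(f"Pressure gain: about {pressure_gain:.0f}-fold")  # on the order of twenty-fold
```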
On the inner side of the foot plate and annular ligament is a fluid-filled space called the inner ear. Vibration of the stapes’ foot plate causes waves in the fluid in the inner ear. The part of the inner ear involved in hearing (another part of the inner ear is involved in balance) is coiled like a snail shell and is called the cochlea. The cochlea is split into three separate, fluid-filled tubes by two thin membranes. One of those membranes, called the basilar membrane, is flexible and vibrates in response to the fluid waves. Attached to the surface of the basilar membrane are the bases of thousands of cells, called hair cells.
Stereocilia on the apex of each hair cell are embedded in another, thicker, non-vibrating membrane, called the tectorial membrane. As the basilar membrane vibrates, the stereocilia bend, which causes chemicals in the base of the cell, called neurotransmitters, to be released. The basilar membrane and hair cells are organized in such a way that high-pitched sounds cause the basilar membrane to vibrate mainly at the base of the cochlea, whereas low-pitched sounds cause it to vibrate mainly at the apex. The inner hair cells carry most of the sound information to the brain, whereas the outer hair cells act chiefly as amplifiers, boosting the response to softer sounds. As a result, the speech patterns generated by the larynx and mouth of the speaker are replicated as mechanical vibrations in specific hair cells.
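The base-to-apex pitch map described above can be made concrete with Greenwood’s place-frequency function, a standard empirical formula for the human cochlea. The sketch below is illustrative only; the constants are Greenwood’s published fits for the human ear, not values from this article:

```python
import math

def greenwood_frequency(x: float) -> float:
    """Approximate characteristic frequency (Hz) at position x along the human
    basilar membrane, where x = 0.0 is the apex and x = 1.0 is the base.
    Constants are Greenwood's fitted human values (A = 165.4, a = 2.1, k = 0.88)."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# Low pitches map near the apex; high pitches map near the base.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"position {x:.2f} along the membrane -> about {greenwood_frequency(x):,.0f} Hz")
```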
The neurotransmitters released by the hair cells cross a tiny, fluid-filled gap and attach to specific receptors on neurons (nerve cells) that make up part of the vestibulocochlear nerve (Cranial Nerve VIII, the balance-and-hearing nerve). Those neurotransmitters trigger action potentials (like tiny electric sparks), which travel along the vestibulocochlear neurons to a relay center in the brainstem, called the cochlear nucleus (nucleus means pit, like a cherry pit, and each nucleus in the brain is made up of many thousands of neurons), where they release their own neurotransmitters. Neurons in the cochlear nucleus, in turn, conduct action potentials, by way of additional brainstem relays, to the next major relay center in the thalamus (the name means a bedroom in the prow of a ship and describes the location of the thalamus in the brain; it may be thought of as the master relay center of the brain). Neurons in the thalamus project to the auditory cortex in the temporal lobe of the brain, where the sound produced by the speaker is actually “heard,” be it English, Spanish, or Chinese.
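Read as a signal chain, the pathway just described can be sketched as a simple relay sequence. This is purely illustrative; the stage names follow the paragraph above, and the real pathway includes additional brainstem relays:

```python
# Illustrative relay chain for the auditory pathway described above.
# Each stage releases neurotransmitters that trigger action potentials in the
# next stage; the content of the signal is passed along essentially unchanged.
AUDITORY_PATHWAY = [
    "hair cells of the cochlea",
    "vestibulocochlear nerve (Cranial Nerve VIII)",
    "cochlear nucleus (brainstem relay)",
    "thalamus (master relay center)",
    "auditory cortex, temporal lobe (where sound is actually 'heard')",
]

def relay(signal: str) -> None:
    """Print the same signal passing through every relay station in order."""
    for stage in AUDITORY_PATHWAY:
        print(f"{stage:60s} carries: {signal}")

# The same chain carries speech in any language.
relay("sound pattern of a spoken sentence, whatever the language")
```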
Neurons in the auditory cortex transmit action potentials to a nearby part of the cortex called Wernicke’s area (named for Carl Wernicke, who first described its function; also called the sensory speech area), where we interpret the action potentials arriving from the auditory cortex. Neurons in Wernicke’s area interact with neurons in other cortical regions of the brain in comprehending what was said.
So where along this complex pathway might “speaking in tongues” and “interpretation of tongues” occur? In modern multilingual meetings, such as those at the United Nations, interpreters work in the organization’s six official languages: Arabic, Chinese, English, French, Russian, and Spanish. Each interpreter hears and comprehends what the speaker is saying in one of those languages and then repeats those words, translated, in one of the other official languages. Participants who do not understand the language being spoken wear headphones so they can listen to the interpreter speaking in their own language. The “interpretation of tongues” is therefore occurring before the sound waves ever reach the tympanic membrane. If a speaker does not speak in one of the six official languages, either as a political statement or because he or she does not know one, the speaker is required to bring his or her own interpreter.
Computer-based digital translators have also been developed recently, such as the Google Pixel Buds, which, together with your smartphone, can translate what is being said in a given language, say German, into English you can understand. Again, this is occurring before the spoken word reaches your eardrum. The Pixel Buds can also speak what you are saying in English back to the other person in German. This system seems to work well if you are traveling in a foreign country and the conversation is simple and spoken slowly. As yet, however, the system isn’t very good at translating rapid or complex conversations.
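As a rough sketch of how such a device works, the whole translation step happens in software, before any sound ever reaches the listener’s eardrum. The function names and the tiny phrasebook below are hypothetical placeholders for illustration, not a real Pixel Buds or Google interface:

```python
# Toy sketch of a speech-to-speech translation pipeline of the kind used by
# earbud-style translators. All names and data here are illustrative only.

TOY_PHRASEBOOK = {("de", "en"): {"guten tag": "good day", "danke": "thank you"}}

def recognize_speech(audio: str, language: str) -> str:
    # Stand-in for speech-to-text; here the "audio" is already a transcript.
    return audio.lower().strip()

def translate_text(text: str, source: str, target: str) -> str:
    # Stand-in for machine translation: a simple phrasebook lookup.
    return TOY_PHRASEBOOK[(source, target)].get(text, f"[untranslated: {text}]")

def synthesize_speech(text: str, language: str) -> str:
    # Stand-in for text-to-speech: label the audio that would be played.
    return f"<spoken in {language}>: {text}"

def translate_spoken(audio_in: str, source: str = "de", target: str = "en") -> str:
    # German "speech" in, English "speech" out - all before the eardrum is involved.
    text = recognize_speech(audio_in, source)
    translated = translate_text(text, source, target)
    return synthesize_speech(translated, target)

print(translate_spoken("Guten Tag"))  # <spoken in en>: good day
```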
Is it possible that on the day of Pentecost, nearly 2000 years ago, God, in some way, changed the sound waves coming from the apostles’ mouths so that when those modified waves struck the tympanic membranes of the Jews from “every nation under heaven,” each person understood in his or her own language—in much the same way as occurs in the UN with modern interpretive headphones? Is it possible that a similar modification of sound waves occurred during the conversation between Karl G. Maeser and Franklin D. Richards? Perhaps.
However, it is my opinion that “speaking in tongues” occurs closer to the terminal end of speech comprehension. It doesn’t matter whether someone is speaking to you in Arabic, Chinese, English, French, Spanish, or Russian; the sound waves, and the action potentials initiated by those sound waves, all reach the auditory cortex in the same manner. You hear a person speaking Russian just as well as you do someone speaking English. Furthermore, presumably, the action potentials traveling from the auditory cortex to Wernicke’s area are also the same. It is in Wernicke’s area that the difference resides. Neurons in Wernicke’s area “reach out” to neurons in other parts of the brain where memory is stored, such as what’s called the sensory cortex in the parietal lobes. Those neurons are, in effect, asking, “Have I ever heard this particular set of sound waves before, and if so, do I know what it means?” Perhaps, with a few fleeting words, the answer may come back “yes,” such as when you hear the word sept or siete (seven in French or Spanish), but most foreign words come into Wernicke’s area so fast that you may not even be able to comprehend the somewhat familiar ones.
That’s where I think God can intervene. He told Oliver Cowdery, in the 8th section of the Doctrine and Covenants (v. 2-3), “Yea, behold, I will tell you in your mind and in your heart, by the Holy Ghost, which shall come upon you and which shall dwell in your heart. Now, behold, this is the spirit of revelation; behold, this is the spirit by which Moses brought the children of Israel through the Red Sea on dry ground.” Telling a person in their mind sounds a lot like some form of broadcast directly to Wernicke’s area. Telling us in our hearts is, of course, a metaphor, describing our feelings as we are touched by the Holy Ghost.
Joseph Smith said in Doctrine and Covenants 85:6, “Yea, thus saith the still small voice, which whispereth through and pierceth all things, and oftentimes it maketh my bones to quake while it maketh manifest…” During the twenty-five years I taught neurobiology at ISU, when discussing the visual and auditory cortices, I taught about revelation—both visual and auditory. I pointed out that we have radio waves passing through our brains every second of every day, but we do not usually hear them. However, we can build a very simple crystal radio with an amazingly small number of key components, which can pick up those radio waves and translate them into sound waves in our ears. In addition, some people who have braces in their mouths can pick up radio waves through them and can hear music playing. It appears, therefore, that our brains are already very close to being able to pick up broadcast radio waves. It seems a simple process for God to send out radio waves that Wernicke’s area can comprehend in a multitude of languages—especially during the Pentecost of AD 33, when the spirit in Jerusalem was already thick enough to be cut with a knife.
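For readers curious about just how few components a crystal set needs: an antenna, a coil and capacitor forming a tuned circuit, a crystal or germanium-diode detector, and a high-impedance earphone are essentially the whole parts list. Which station it picks out of the air is set by the standard LC resonance formula, f = 1/(2π√(LC)). The component values in this quick calculation are typical hobbyist choices, not anything from the article:

```python
import math

def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an LC tuned circuit: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

L = 240e-6  # a typical hand-wound coil, about 240 microhenries
for C in (40e-12, 100e-12, 365e-12):  # a variable tuning capacitor swept over its range
    f = resonant_frequency_hz(L, C)
    print(f"C = {C * 1e12:5.0f} pF  ->  f = about {f / 1e3:5.0f} kHz")

# Sweeping the capacitor tunes the set across roughly the AM broadcast band
# (about 540-1600 kHz), selecting which station's waves get turned into sound.
```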
Trent Dee Stephens
trentdeestephens.com