Peculiarities of human sound perception. Perception of sound by the human ear. What determines the perception of sound.

A person perceives sound through the ear (Fig.).

On the outside is the pinna (auricle) of the outer ear, which passes into the auditory canal with a diameter of D1 = 5 mm and a length of about 3 cm.

Next is the eardrum, which vibrates (resonates) under the influence of a sound wave. The eardrum is attached to the ossicles of the middle ear, which transmit the vibration to another membrane and onward to the inner ear.

The inner ear has the form of a coiled, fluid-filled tube (the cochlea, or "snail"). The diameter of this tube is D2 = 0.2 mm and its length is 3–4 cm.

Since the air vibrations in a sound wave are too weak to excite the fluid of the cochlea directly, the middle and inner ear, together with their membranes, act as a hydraulic amplifier. The area of the membrane of the inner ear (the oval window) is smaller than the area of the eardrum, and the pressure exerted by the sound on the membranes is inversely proportional to their areas:

p2 / p1 = S1 / S2.

Therefore, the pressure on the inner ear increases significantly:

p2 = p1 · S1 / S2 ≫ p1.
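A minimal numerical sketch of this amplification follows; the areas used (eardrum ≈ 55 mm², oval window ≈ 3.2 mm²) are typical textbook values assumed here for illustration, not values given in this text, and the additional gain of the ossicular lever is ignored.

    # Rough estimate of the middle-ear pressure gain from the area ratio alone.
    S_eardrum = 55e-6        # m^2, area of the eardrum (assumed typical value)
    S_oval_window = 3.2e-6   # m^2, area of the oval-window membrane (assumed typical value)

    p1 = 0.02                # Pa, sound pressure at the eardrum (~60 dB SPL, example value)
    p2 = p1 * S_eardrum / S_oval_window   # p2 / p1 = S1 / S2

    print(f"area ratio S1/S2 = {S_eardrum / S_oval_window:.1f}")   # ~17
    print(f"pressure at the oval window p2 = {p2:.3f} Pa")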

In the inner ear, another (longitudinal) membrane is stretched along its entire length; it is stiff at the beginning and soft towards the end. Each section of this longitudinal membrane can vibrate at its own frequency: high-frequency oscillations are excited in the stiff section and low-frequency oscillations in the soft section. Along this membrane runs the vestibulocochlear nerve, which senses the vibrations and transmits them to the brain.

The lowest vibration frequencies of a sound source, 16–20 Hz, are perceived by the ear as a low bass sound. The region of highest hearing sensitivity covers part of the mid-frequency and part of the high-frequency subranges and corresponds to frequencies from 500 Hz to 4–5 kHz. The human voice and the sounds produced by most of the natural processes important to us lie in this same interval. Sounds with frequencies from 2 kHz to 5 kHz are heard by the ear as ringing or whistling. In other words, the most important information is carried by audio frequencies up to approximately 4–5 kHz.

Subconsciously, a person divides sounds into “positive”, “negative” and “neutral”.

Negative sounds are sounds that are unfamiliar, strange and inexplicable; they cause fear and anxiety. They also include low-frequency sounds, such as a low drumbeat or the howl of a wolf, which likewise arouse fear. In addition, fear and dread are caused by inaudible low-frequency sound (infrasound). Examples:

    In the 1930s, a huge organ pipe was used as a stage effect in one of the London theatres. The infrasound from this pipe made the whole building tremble, and terror settled over the audience.

    Employees of the National Physical Laboratory in England conducted an experiment in which ultra-low (infrasonic) frequencies were added to the sound of conventional acoustic instruments playing classical music. The listeners felt their mood decline and experienced a feeling of fear.

    At the Department of Acoustics of Moscow State University, studies were carried out on the influence of rock and pop music on the human body. It turned out that the frequency of the main rhythm of one Deep Purple composition causes uncontrollable excitement, loss of self-control, and aggressiveness towards others or negative emotions towards oneself. One song by The Beatles, euphonious at first hearing, turned out to be harmful and even dangerous, because its basic rhythm is about 6.4 Hz. This frequency resonates with the natural frequencies of the chest and abdominal cavity and is close to the natural frequency of the brain (7 Hz); according to these studies, the tissues of the abdomen and chest therefore begin to ache and gradually deteriorate while the composition is being listened to.

    Infrasound causes vibrations in various systems of the human body, in particular the cardiovascular system. This has adverse effects and can lead, for example, to hypertension. Oscillations at a frequency of about 12 Hz can, if their intensity exceeds a critical threshold, cause the death of higher organisms, including humans. This and other infrasonic frequencies are present in industrial noise, highway noise and other sources.

Comment: In animals, the resonance of musical frequencies and natural frequencies can lead to the breakdown of brain function. When "metal rock" sounds, cows stop giving milk, but pigs, on the contrary, adore metal rock.

The sounds of a stream, the tide of the sea or birdsong are positive; they induce calm.

Besides, rock isn't always bad. For example, country music played on a banjo helps to recover, although it has a bad effect on health at the very beginning of the disease.

Positive sounds include classical melodies. For example, American scientists placed premature infants in incubators where the music of Bach and Mozart was played, and the children recovered faster and gained weight.

Bell ringing has a beneficial effect on human health.

Any sound effect is enhanced in twilight and darkness, since the proportion of information received through vision decreases.

        Sound absorption in air and enclosing surfaces

Absorption of sound in air

At each moment of time, at any point in the room, the sound intensity is equal to the sum of the intensity of the direct sound coming straight from the source and the intensity of the sound reflected from the enclosing surfaces of the room:

I = I_direct + I_reflected.

When sound propagates in atmospheric air, or in any other medium, intensity losses occur. These losses are due to the absorption of sound energy in the air and in the enclosing surfaces. Let us consider sound absorption from the standpoint of wave theory.

Sound absorption is the phenomenon of irreversible transformation of the energy of a sound wave into another type of energy, primarily into the energy of thermal motion of the particles of the medium. Sound absorption occurs both in the air and when sound is reflected from enclosing surfaces.

The absorption of sound in air is accompanied by a decrease in the sound pressure. Let the sound travel in the direction r from the source. Then, as the distance r from the sound source grows, the sound pressure amplitude decreases according to an exponential law:

p = p0 · e^(−β·r),   (63)

where p0 is the initial sound pressure at r = 0 and β is the sound absorption coefficient. Formula (63) expresses the law of sound absorption.

The physical meaning of the coefficient β: it is numerically equal to the reciprocal of the distance over which the sound pressure decreases by a factor of e ≈ 2.72, i.e. p(1/β) = p0 / e.

Its SI unit is

[β] = 1/m = m⁻¹.

Since the sound strength (intensity) is proportional to the square of the sound pressure, the same law of sound absorption can be written as

I = I0 · e^(−2·β·r),   (63*)

where I0 is the sound strength (intensity) near the sound source, i.e. at r = 0.

Graphs of p(r) and I(r) are presented in Fig. 16.

From formula (63*) it follows that the sound intensity level decreases linearly with distance:

ΔL = 10·lg(I0 / I(r)) = 20·β·r·lg e ≈ 8.7·β·r dB.   (64)

Therefore, the absorption coefficient can also be expressed in nepers per metre:

[β] = Np/m,   1 Np/m ≈ 8.7 dB/m.

In addition, it can be expressed in bels per metre (B/m) or decibels per metre (dB/m).
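A minimal sketch of formulas (63), (63*) and (64) in code; the value of the absorption coefficient below is an arbitrary placeholder chosen only for illustration.

    import math

    beta = 0.01          # 1/m, sound absorption coefficient (placeholder value)
    p0, I0 = 1.0, 1.0    # sound pressure and intensity at r = 0 (arbitrary units)

    def pressure(r):
        """Formula (63): p(r) = p0 * exp(-beta * r)."""
        return p0 * math.exp(-beta * r)

    def intensity(r):
        """Formula (63*): I(r) = I0 * exp(-2 * beta * r), since I is proportional to p^2."""
        return I0 * math.exp(-2 * beta * r)

    def level_drop_dB(r):
        """Formula (64): drop of the intensity level in decibels, ~8.7 * beta * r."""
        return 10 * math.log10(I0 / intensity(r))

    r = 100.0  # m
    print(pressure(r), intensity(r), level_drop_dB(r))
    # level_drop_dB(1.0) / beta reproduces the conversion 1 Np/m ~ 8.69 dB/m.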

Comment: sound absorption can also be characterized by the loss factor, which is equal to

η = β·λ / π,   (65)

where λ is the sound wavelength and the product β·λ is the logarithmic attenuation coefficient of the sound. The quantity equal to the reciprocal of the loss factor,

Q = 1 / η = π / (β·λ),

is called the quality factor.

There is as yet no complete theory of sound absorption in air (the atmosphere). Numerous empirical estimates give different values for the absorption coefficient.

The first (classical) theory of sound absorption was created by Stokes and is based on taking into account viscosity (internal friction between layers of the medium) and thermal conductivity (temperature equalization between layers of the medium). In a simplified classical (Stokes–Kirchhoff) form it can be written as

β = (2·π²·f² / (ρ0·c³)) · [ (4/3)·η + (γ − 1)·κ / c_p ],   (66)

where η is the viscosity of air, γ is Poisson's ratio (the adiabatic index), ρ0 is the density of air at 0 °C, c is the speed of sound in air, κ is the thermal conductivity of air and c_p is its specific heat at constant pressure. Under normal conditions this formula gives approximately

β ≈ 1.4·10⁻¹¹ · f²  m⁻¹  (f in hertz).   (66*)

However, the Stokes formula (66) or (66*) is valid only for monatomic gases, whose atoms have three translational degrees of freedom, i.e. when γ = 1.67.

For gases with di-, tri- or polyatomic molecules the absorption is significantly greater, since sound also excites the rotational and vibrational degrees of freedom of the molecules. For such gases (including air), a more accurate formula takes the actual pressure and temperature into account:

β = (2·π²·f² / (ρ0·c³)) · [ (4/3)·η + (γ − 1)·κ / c_p ] · (p_n / p) · (T / T_n)^(1/2),   (67)

where T_n = 273.15 K is the normal (ice-point) temperature, p_n = 1.013·10⁵ Pa is normal atmospheric pressure, T and p are the actual (measured) temperature and atmospheric pressure, γ = 1.40 for diatomic gases and γ = 1.33 for tri- and polyatomic gases.

Sound absorption by enclosing surfaces

Sound absorption by enclosing surfaces occurs when sound is reflected from them. Part of the energy of the sound wave is reflected and gives rise to standing sound waves, while the rest of the energy is converted into the energy of thermal motion of the particles of the obstacle. These processes are characterized by the reflection coefficient and the absorption coefficient of the enclosing structure.

The reflection coefficient of sound from an obstacle is a dimensionless quantity equal to the ratio of the part of the wave energy W_refl reflected from the obstacle to the total energy W_inc of the wave incident on the obstacle:

R = W_refl / W_inc.

Sound absorption by an obstacle is characterized by the absorption coefficient, a dimensionless quantity equal to the ratio of the part of the wave energy W_abs absorbed by the obstacle (and transformed into the internal energy of the obstacle material) to the total energy W_inc of the wave incident on the obstacle:

α = W_abs / W_inc.

The average sound absorption coefficient of all the enclosing surfaces is

α_avg = (α_1·S_1 + α_2·S_2 + … + α_n·S_n) / S,   (68*)

where α_i is the sound absorption coefficient of the material of the i-th obstacle, S_i is the area of the i-th obstacle, S = S_1 + S_2 + … + S_n is the total area of the obstacles, and n is the number of different obstacles.

From this expression we can conclude that the average absorption coefficient corresponds to a single material that could cover all the surfaces of the room's obstacles while preserving the total sound absorption A, equal to

A = α_avg · S.   (69)

Physical meaning of the total sound absorption A: it is numerically equal to the area of a completely absorbing surface (an open opening, for which α = 1) that absorbs as much sound as all the enclosing surfaces together.

The unit of sound absorption is called the sabin:

1 sabin = 1 m² of open opening (α = 1).
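A minimal sketch of formulas (68*) and (69); the surface areas and material coefficients below are invented example values, not data from this text.

    # Average absorption coefficient (68*) and total absorption A (69) for a room.
    surfaces = [
        # (area S_i in m^2, absorption coefficient alpha_i), example values only
        (60.0, 0.02),   # e.g. painted walls
        (20.0, 0.30),   # e.g. carpeted floor
        (20.0, 0.60),   # e.g. acoustic ceiling tiles
    ]

    S_total = sum(S for S, _ in surfaces)
    alpha_avg = sum(alpha * S for S, alpha in surfaces) / S_total   # formula (68*)
    A = alpha_avg * S_total                                         # formula (69), in sabins (m^2)

    print(f"average absorption coefficient = {alpha_avg:.3f}")
    print(f"total sound absorption A = {A:.1f} m^2 (sabins)")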

Having considered the theory of propagation and the mechanisms by which sound waves arise, it is useful to understand how sound is "interpreted", or perceived, by a human being. The paired organ responsible for the perception of sound waves in the human body is the ear. The human ear is a very complex organ with two functions: 1) it perceives sound impulses; 2) it acts as the vestibular apparatus of the whole body, determining the position of the body in space and providing the vital ability to maintain balance. The average human ear is capable of detecting vibrations of 20–20,000 Hz, with individual deviations up or down. Ideally, the audible frequency range is 16–20,000 Hz, which corresponds to wavelengths from roughly 21 m down to roughly 1.7 cm (a short check of this correspondence is sketched below). The ear is divided into three parts: the outer, middle and inner ear. Each of these "divisions" performs its own function, but all three are closely connected with each other and in effect hand the sound wave on to one another.
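The wavelength bounds quoted above follow from λ = c / f; a minimal check, assuming a speed of sound of about 340 m/s:

    c = 340.0  # m/s, speed of sound in air at room temperature (assumed)

    for f in (16, 1000, 20000):   # Hz, edges and middle of the audible range
        print(f"{f:>6} Hz  ->  wavelength {c / f:.4f} m")
    # 16 Hz gives ~21 m, 20 000 Hz gives ~0.017 m (1.7 cm).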

External (outer) ear

The outer ear consists of the pinna (auricle) and the external auditory canal. The auricle is an elastic cartilage of complex shape, covered with skin; at its lower end is the lobe, which consists of fatty tissue and is likewise covered with skin. The auricle acts as a receiver of sound waves from the surrounding space. The special shape of the auricle makes it possible to capture sounds better, especially sounds of the mid-frequency range, which carries speech information. This is largely an evolutionary necessity, since a person spends much of their life in spoken communication with other people. The human auricle is practically motionless, unlike that of many animals, which use ear movements to aim more precisely at a sound source.

The folds of the human auricle introduce small corrections (minor distortions) that depend on the vertical and horizontal position of the sound source in space. It is thanks to this feature that a person is able to determine quite accurately the location of an object in space relative to themselves, guided only by sound; this ability is well known under the term "sound localization". The main function of the auricle is to catch as many sounds as possible in the audible frequency range. The further fate of the "caught" sound waves is decided in the ear canal, whose length is 25–30 mm. In it, the cartilaginous part of the external auditory canal passes into bone, and the skin of the canal contains sebaceous and ceruminous (earwax) glands. At the end of the ear canal is the elastic eardrum, which the vibrations of the sound waves reach, causing it to vibrate in response. The eardrum, in turn, transmits these vibrations to the middle ear.

Middle ear

Vibrations transmitted by the eardrum enter a region of the middle ear called the tympanic cavity. This is a space with a volume of about one cubic centimetre in which the three auditory ossicles are located: the malleus, the incus and the stapes. It is these "intermediate" elements that perform the most important function: transmitting sound waves to the inner ear and amplifying them at the same time. The auditory ossicles form an extremely intricate chain of sound transmission. All three bones are closely connected to each other, as well as to the eardrum, so that vibrations are transmitted "along the chain". At the entrance to the inner ear is the window of the vestibule (the oval window), which is closed by the base of the stapes. To equalize the pressure on both sides of the eardrum (for example, when the external pressure changes), the middle ear is connected to the nasopharynx via the Eustachian tube. The familiar effect of "blocked" ears arises precisely from this fine tuning. From the middle ear the sound vibrations, already amplified, enter the most complex and sensitive region, the inner ear.

Inner ear

The inner ear has the most complex shape and is for this reason called the labyrinth. The bony labyrinth includes the vestibule, the cochlea and the semicircular canals; the semicircular canals belong to the vestibular apparatus, which is responsible for balance, while the cochlea is the part directly related to hearing. The cochlea is a spiral-shaped membranous canal filled with lymphatic fluid. Inside, the canal is divided into two parts by another membranous partition called the main (basilar) membrane. This membrane consists of fibres of various lengths (more than 24,000 in total), stretched like strings, each of which resonates with its own particular sound. The membrane divides the canal into an upper and a lower scala, which communicate at the apex of the cochlea. At the opposite end, the canal connects to the receptor apparatus of the auditory analyser, which is covered with tiny hair cells; this apparatus is also called the organ of Corti. When vibrations from the middle ear enter the cochlea, the lymphatic fluid filling the canal also begins to vibrate, transmitting the vibrations to the main membrane. At this moment the auditory analyser comes into action: its hair cells, arranged in several rows, transform the sound vibrations into electrical "nerve" impulses, which are transmitted along the auditory nerve to the temporal zone of the cerebral cortex. In this complex and intricate way a person ultimately hears the desired sound.

Features of perception and speech formation

The mechanism of speech formation developed in humans over the whole course of evolution. The purpose of this ability is to transmit verbal and non-verbal information: the former carries the verbal and semantic load, the latter conveys the emotional component. The process of creating and perceiving speech includes: formulating the message; coding it into elements according to the rules of the language; transient neuromuscular actions; movements of the vocal cords; emission of an acoustic signal. The listener then comes into action, performing: spectral analysis of the received acoustic signal and selection of acoustic features in the peripheral auditory system; transmission of the selected features via neural networks; recognition of the language code (linguistic analysis); and understanding of the meaning of the message.
The apparatus for generating speech signals can be compared to a complex wind instrument, but the versatility and flexibility of its configuration and its ability to reproduce the slightest subtleties and details have no analogue in nature. The voice-forming mechanism consists of three inseparable components:

  1. Generator: the lungs, acting as a reservoir of air. The energy of excess pressure is stored in the lungs and then, with the help of the muscular system, is expelled through the trachea, which is connected to the larynx. At this stage the air stream is interrupted and modified;
  2. Vibrator: the vocal cords (folds). The flow is also shaped by turbulent air jets (creating edge tones) and by pulsed sources (plosives);
  3. Resonator: the resonant cavities of complex geometric shape (the pharynx, the oral and nasal cavities).

The combination of the individual arrangement of these elements forms the unique, individual timbre of each person's voice.

The energy of the air column is generated in the lungs, which create a flow of air during inhalation and exhalation owing to the difference between atmospheric and intrapulmonary pressure. Energy is accumulated during inhalation and released during exhalation. This happens through the compression and expansion of the chest, carried out by two muscle groups, the intercostal muscles and the diaphragm; with deep breathing and singing, the abdominal, chest and neck muscles also contract. When you inhale, the diaphragm contracts and moves down, contraction of the external intercostal muscles raises the ribs and moves them to the sides and the sternum forward. The expansion of the chest lowers the pressure inside the lungs (relative to atmospheric pressure), and this space rapidly fills with air. When you exhale, the muscles relax and everything returns to its previous state: the chest returns to its original position under its own weight, the diaphragm rises, the volume of the previously expanded lungs decreases and the intrapulmonary pressure increases. Inhalation is thus a process requiring the expenditure of energy (active); exhalation is a passive process in which the stored energy is released. Control of breathing and speech formation is normally unconscious, but in singing breath control requires a conscious approach and long additional training.

The amount of energy subsequently spent on forming speech and voice depends on the volume of stored air and on the additional pressure in the lungs; the sound pressure level developed by a trained opera singer can reach 100–112 dB. The modulation of the air flow by the vibration of the vocal cords and the creation of excess subglottal pressure take place in the larynx, which is a kind of valve located at the end of the trachea. The valve performs a dual function: it protects the lungs from foreign objects and maintains high pressure. It is the larynx that acts as the source of speech and singing. The larynx is an assembly of cartilages connected by muscles; it has a rather complex structure whose main element is the pair of vocal cords. It is the vocal cords that are the main (though not the only) source of voice production, the "vibrator". During phonation the vocal cords move with friction against each other; to protect them, a special mucous secretion is produced that acts as a lubricant. The formation of speech sounds is determined by the vibration of the cords, which shapes the flow of air exhaled from the lungs into a particular amplitude pattern. Between the vocal folds there are small cavities that act as acoustic filters and resonators when required.

Features of auditory perception, listening safety, hearing thresholds, adaptation, correct volume level

As can be seen from the description of the structure of the human ear, this organ is very delicate and quite complex. Taking this into account, it is not difficult to see that this extremely delicate and sensitive apparatus has a set of limitations and thresholds. The human auditory system is adapted to perceive quiet sounds and sounds of medium intensity. Prolonged exposure to loud sounds entails irreversible shifts of the hearing thresholds, as well as other hearing problems, up to complete deafness. The degree of damage is directly proportional to the exposure time in a loud environment. At the same time the adaptation mechanism comes into play: under prolonged loud sounds the sensitivity gradually decreases, the perceived volume decreases, and the hearing adapts.

Adaptation initially serves to protect the hearing organs from sounds that are too loud, yet it is precisely this process that most often drives a person to keep raising the volume of an audio system uncontrollably. The protection is realized by the mechanism of the middle and inner ear: the stapes is pulled back from the oval window, shielding the inner ear from excessively loud sounds. But the protection mechanism is not ideal and has a time delay, triggering only 30–40 ms after the sound begins, and full protection is not reached even after 150 ms. The protection mechanism is activated when the volume level exceeds 85 dB, and the protection itself provides no more than about 20 dB of attenuation.
The most dangerous, in this case, can be considered the phenomenon of “auditory threshold shift,” which usually occurs in practice as a result of prolonged exposure to loud sounds above 90 dB. The process of restoration of the auditory system after such harmful effects can last up to 16 hours. The threshold shift begins already at an intensity level of 75 dB, and increases proportionally with increasing signal level.

When considering the problem of the correct sound intensity level, the hardest thing to accept is that hearing problems (acquired or congenital) are practically untreatable even in our age of fairly advanced medicine. This should lead any sensible person to think about taking care of their hearing if they intend to preserve its integrity and the ability to hear the entire frequency range for as long as possible. Fortunately, things are not as frightening as they may seem at first glance, and by following a number of precautions you can easily preserve your hearing even into old age. Before considering these measures, one important feature of human auditory perception must be recalled: the auditory system perceives sound nonlinearly. The phenomenon is as follows: if a pure tone of a single frequency, for example 300 Hz, is presented, the nonlinearity manifests itself in the appearance, within the ear itself, of overtones of this fundamental frequency at integer multiples (if the fundamental frequency is taken as f, the overtones lie at 2f, 3f and so on in increasing order; a short listing below illustrates this series). This nonlinearity is also familiar to many under the name of "nonlinear distortion". Since such harmonics (overtones) are absent from the original pure tone, it turns out that the ear adds its own corrections and overtones to the original sound, but they can only be regarded as subjective distortions. At intensity levels below 40 dB, subjective distortion does not arise. As the intensity rises above 40 dB, the level of subjective harmonics begins to grow, but even at 80–90 dB their negative contribution to the sound is relatively small (so this intensity level can conditionally be considered a kind of "golden mean" in the musical field).
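A trivial enumeration of the overtone series mentioned above, for the 300 Hz fundamental used as the example in the text:

    f0 = 300.0   # Hz, fundamental frequency of the pure tone from the example above

    # Overtones lie at integer multiples of the fundamental: 2f, 3f, ...
    overtones = [n * f0 for n in range(2, 11) if n * f0 <= 20000]
    print(overtones)   # 600, 900, 1200, ... up to 3000 Hz, all well inside the audible range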

Based on this information, you can easily determine a safe and acceptable volume level that will not harm the auditory organs and at the same time will make it possible to hear absolutely all the features and details of the sound, for example, in the case of working with a “hi-fi” system. This "golden mean" level is approximately 85-90 dB. It is at this sound intensity that it is possible to hear everything that is contained in the audio path, while the risk of premature damage and hearing loss is minimized. A volume level of 85 dB can be considered almost completely safe. To understand what the dangers of loud listening are and why too low a volume level does not allow you to hear all the nuances of sound, let’s look at this issue in more detail. As for low volume levels, the lack of expediency (but more often subjective desire) of listening to music at low levels is due to the following reasons:

  1. Nonlinearity of human auditory perception;
  2. Features of psychoacoustic perception, which will be discussed separately.

The nonlinearity of auditory perception discussed above has a significant effect at any volume below 80 dB. In practice, it looks like this: if you turn on music at a quiet level, for example 40 dB, then the mid-frequency range of the musical composition will be most clearly heard, be it the vocals of the performer or instruments playing in this range. At the same time, there will be a clear lack of low and high frequencies, due precisely to the nonlinearity of perception and also to the fact that different frequencies sound at different volumes. Thus, it is obvious that in order to fully perceive the entirety of the picture, the frequency intensity level must be aligned as much as possible to a single value. Despite the fact that even at a volume level of 85-90 dB there is no idealized equalization of the volume of different frequencies, the level becomes acceptable for normal everyday listening. The lower the volume at the same time, the more clearly the characteristic nonlinearity will be perceived by ear, namely the feeling of the absence of the proper amount of high and low frequencies. At the same time, it turns out that with such nonlinearity it is impossible to speak seriously about reproducing high-fidelity “hi-fi” sound, because the accuracy of the original sound picture will be extremely low in this particular situation.

If you delve into these findings, it becomes clear why listening to music at a low volume level, although the most safe from a health point of view, is extremely negative for the ear due to the creation of clearly implausible images of musical instruments and voices, and the lack of scale of the sound stage. In general, quiet music playback can be used as background accompaniment, but it is completely contraindicated to listen to high “hi-fi” quality at low volume, for the above reasons of the impossibility of creating naturalistic images of the sound stage, which was formed by the sound engineer in the studio, at the sound recording stage. But not only low volume introduces certain restrictions on the perception of the final sound; the situation is much worse with increased volume. It is possible and quite simple to damage your hearing and significantly reduce sensitivity if you listen to music at levels above 90 dB for a long time. These data are based on a large number of medical studies, concluding that sound above 90 dB causes real and almost irreparable harm to health. The mechanism of this phenomenon lies in auditory perception and the structural features of the ear. When a sound wave with an intensity above 90 dB enters the ear canal, the middle ear organs come into play, causing a phenomenon called auditory adaptation.

The principle of what happens in this case is this: the stapes is moved away from the oval window and protects the inner ear from too loud sounds. This process is called acoustic reflex. To the ear, this is perceived as a short-term decrease in sensitivity, which may be familiar to anyone who has ever attended rock concerts in clubs, for example. After such a concert, a short-term decrease in sensitivity occurs, which after a certain period of time is restored to its previous level. However, restoration of sensitivity will not always happen and directly depends on age. Behind all this lies the great danger of listening to loud music and other sounds, the intensity of which exceeds 90 dB. The occurrence of an acoustic reflex is not the only “visible” danger of loss of auditory sensitivity. When exposed to too loud sounds for a long time, the hairs located in the area of ​​the inner ear (which respond to vibrations) become very deflected. In this case, the effect occurs that the hair responsible for the perception of a certain frequency is deflected under the influence of high-amplitude sound vibrations. At a certain point, such a hair may deviate too much and cannot return back. This will cause a corresponding loss of sensitivity at a specific frequency!

The worst thing about this whole situation is that ear diseases are practically untreatable, even with the most modern methods known to medicine. All this leads to certain serious conclusions: sound above 90 dB is dangerous to health and is almost guaranteed to cause premature hearing loss or a significant decrease in sensitivity. What’s even more unpleasant is that the previously mentioned property of adaptation comes into play over time. This process in human auditory organs occurs almost imperceptibly, i.e. a person who is slowly losing sensitivity is close to 100% likely not to notice this until the people around them themselves pay attention to the constant repeated questions, like: “What did you just say?” The conclusion in the end is extremely simple: when listening to music, it is vitally important not to allow sound intensity levels above 80-85 dB! There is also a positive side to this point: the volume level of 80-85 dB approximately corresponds to the level of music recording in a studio environment. This is where the concept of the “Golden Mean” arises, above which it is better not to rise if health issues are of any importance.

Even listening to music for a short period of time at a level of 110-120 dB can cause hearing problems, for example during a live concert. Obviously, it is sometimes impossible or very difficult to avoid this, but it is extremely important to try to do this in order to maintain the integrity of auditory perception. Theoretically, short-term exposure to loud sounds (not exceeding 120 dB), even before the onset of “auditory fatigue,” does not lead to serious negative consequences. But in practice, there are usually cases of prolonged exposure to sound of such intensity. People deafen themselves without realizing the full extent of the danger in a car when listening to an audio system, at home in similar conditions, or in the headphones of a portable player. Why does this happen, and what forces the sound to become louder and louder? There are two answers to this question: 1) The influence of psychoacoustics, which will be discussed separately; 2) The constant need to “shout out” some external sounds with the volume of the music. The first aspect of the problem is quite interesting, and will be discussed in detail further, but the second side of the problem leads more to negative thoughts and conclusions about an erroneous understanding of the true fundamentals of proper listening to hi-fi class sound.

Without going into specifics, the general conclusion about listening to music and the correct volume is as follows: listening to music should take place at sound intensity levels no higher than 90 dB and no lower than 80 dB, in a room into which extraneous sounds from external sources (such as neighbours' conversations and other noise behind the apartment wall, or street and technical noise if you are in a car) do not intrude. I would like to emphasize once and for all that it is precisely by meeting these admittedly stringent requirements that you can achieve the long-awaited balance of volume, which does not cause premature, unwanted damage to the auditory organs and at the same time brings true pleasure from listening to your favourite music with the smallest sound details at high and low frequencies, with the precision pursued by the very concept of "hi-fi" sound.

Psychoacoustics and features of perception

In order to most fully answer some important questions regarding the final human perception of sound information, there is a whole branch of science that studies a huge variety of such aspects. This section is called "psychoacoustics". The fact is that auditory perception does not end only with the functioning of the auditory organs. After the direct perception of sound by the organ of hearing (ear), then the most complex and little-studied mechanism for analyzing the information received comes into play; this is entirely the responsibility of the human brain, which is designed in such a way that during operation it generates waves of a certain frequency, and they are also designated in Hertz (Hz). Different frequencies of brain waves correspond to certain human states. Thus, it turns out that listening to music helps to change the brain's frequency tuning, and this is important to consider when listening to musical compositions. Based on this theory, there is also a method of sound therapy by directly influencing a person’s mental state. There are five types of brain waves:

  1. Delta waves (waves below 4 Hz). Corresponds to a state of deep sleep without dreams, while there is a complete absence of body sensations.
  2. Theta waves (4-7 Hz waves). State of sleep or deep meditation.
  3. Alpha waves (waves 7-13 Hz). State of relaxation and rest during wakefulness; drowsiness.
  4. Beta waves (waves 13-40 Hz). State of activity, everyday thinking and mental activity, excitement and cognition.
  5. Gamma waves (waves above 40 Hz). A state of intense mental activity, fear, excitement and awareness.

Psychoacoustics, as a branch of science, seeks answers to the most interesting questions regarding the final human perception of sound information. In the process of studying this process, a huge number of factors are revealed, the influence of which invariably occurs both in the process of listening to music and in any other case of processing and analyzing any sound information. A psychoacoustician studies almost the entire variety of possible influences, starting with the emotional and mental state of a person at the time of listening, ending with the structural features of the vocal cords (if we are talking about the peculiarities of perceiving all the subtleties of vocal performance) and the mechanism of converting sound into electrical impulses of the brain. The most interesting, and most importantly important factors (which are vitally important to take into account every time you listen to your favorite musical compositions, as well as when building a professional audio system) will be discussed further.

The concept of consonance, musical consonance

The structure of the human auditory system is unique primarily in its mechanism of sound perception, the nonlinearity of the auditory system, and its ability to group sounds by pitch with a fairly high degree of accuracy. The most interesting feature of perception is the nonlinearity of the auditory system, which manifests itself as the appearance of additional harmonics absent from the original tone, especially noticeable in people with a musical or absolute ear. If we dwell on this in more detail and analyse all the subtleties of the perception of musical sound, the concepts of "consonance" and "dissonance" of various chords and sound intervals are easily distinguished. The concept of "consonance" is defined as a concordant (from the French word for "agreement") sound, and "dissonance", conversely, as a discordant, jarring sound. Despite the variety of interpretations of these concepts and of the characteristics of musical intervals, it is most convenient to use the "musical-psychological" reading of the terms: consonance is defined and felt by a person as a pleasant, comfortable, soft sound; dissonance, on the other hand, can be characterized as a sound that causes irritation, anxiety and tension. Such terminology is somewhat subjective, and over the history of music completely different intervals have been taken as "consonant" and vice versa.

Nowadays these concepts are also difficult to interpret unambiguously, since people differ in musical preferences and tastes and there is no universally accepted and agreed definition of harmony. The psychoacoustic basis for perceiving various musical intervals as consonant or dissonant rests on the concept of the critical band. The critical band is a certain bandwidth within which auditory sensations change dramatically. The width of the critical bands increases with frequency. Therefore, the sensation of consonance or dissonance is directly related to the existence of critical bands. The human hearing organ (the ear), as mentioned earlier, plays the role of a bandpass filter at a certain stage of the analysis of sound waves. This role is assigned to the basilar membrane, on which 24 critical bands with frequency-dependent widths are located.

Thus, consonance and dissonance depend directly on the resolution of the auditory system. If two different tones sound in unison, or the frequency difference is zero, this is perfect consonance. Consonance also occurs if the frequency difference is greater than the critical band. Dissonance arises only when the frequency difference is from 5% to 50% of the critical band, and the highest degree of dissonance within this segment is heard when the difference is about one quarter of the width of the critical band (a rough numerical sketch of this rule follows below). Starting from this, it is easy to analyse any mixed musical recording and combination of instruments for consonance or dissonance of the sound. It is not difficult to guess how big a role the sound engineer, the recording studio and the other components of the final digital or analogue audio track play in this, and all this even before any attempt is made to play it on sound-reproducing equipment.
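A minimal sketch of that rule of thumb. The critical-bandwidth formula used here is the Zwicker–Terhardt approximation, an assumption introduced for illustration rather than something given in this text:

    def critical_bandwidth(f_hz: float) -> float:
        """Approximate critical bandwidth in Hz (Zwicker-Terhardt fit, assumed model)."""
        return 25.0 + 75.0 * (1.0 + 1.4 * (f_hz / 1000.0) ** 2) ** 0.69

    def classify_interval(f1: float, f2: float) -> str:
        """Classify two pure tones by the rule of thumb described in the text."""
        diff = abs(f1 - f2)
        cb = critical_bandwidth((f1 + f2) / 2.0)
        ratio = diff / cb
        if ratio == 0.0 or ratio > 1.0:
            return "consonance (unison, or separated by more than a critical band)"
        if 0.05 <= ratio <= 0.5:
            return "dissonance (roughness region, worst near 25% of the critical band)"
        return "intermediate"

    print(critical_bandwidth(440.0))        # roughly 110 Hz around A4
    print(classify_interval(440.0, 466.2))  # a semitone near A4 falls in the roughness region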

Sound localization

The system of binaural hearing and spatial localization helps a person to perceive the fullness of the spatial sound picture. This perception mechanism is realized through two hearing receivers and two auditory channels. The sound information that arrives through these channels is subsequently processed in the peripheral part of the auditory system and is subjected to spectrotemporal analysis. Further, this information is transmitted to the higher parts of the brain, where the difference between the left and right sound signals is compared, and a single sound image is formed. This described mechanism is called binaural hearing. Thanks to this, a person has the following unique capabilities:

1) localization of sound signals from one or more sources, thereby forming a spatial picture of the perception of the sound field
2) separation of signals coming from different sources
3) highlighting some signals against the background of others (for example, isolating speech and voice from noise or the sound of instruments)

Spatial localization is easy to observe with a simple example. At a concert, with a stage and a certain number of musicians on it at a certain distance from each other, you can easily (if desired, even by closing your eyes) determine the direction of arrival of the sound signal of each instrument, evaluate the depth and spatiality of the sound field. In the same way, a good hi-fi system is valued, capable of reliably “reproducing” such effects of spatiality and localization, thereby actually “deceiving” the brain into feeling a full presence at the live performance of your favorite performer. The localization of a sound source is usually determined by three main factors: time, intensity and spectral. Regardless of these factors, there are a number of patterns that can be used to understand the basics regarding sound localization.

The greatest localization effect perceived by human hearing is in the mid-frequency region. At the same time, it is almost impossible to determine the direction of sounds of frequencies above 8000 Hz and below 150 Hz. The latter fact is especially widely used in hi-fi and home theater systems when choosing the location of the subwoofer (low-frequency section), the location of which in the room, due to the lack of localization of frequencies below 150 Hz, is practically irrelevant, and the listener in any case has a holistic image of the sound stage. The accuracy of localization depends on the location of the source of sound wave radiation in space. Thus, the greatest accuracy of sound localization is observed in the horizontal plane, reaching a value of 3°. In the vertical plane, the human auditory system is much worse at determining the direction of the source; the accuracy in this case is 10-15° (due to the specific structure of the ears and complex geometry). The localization accuracy varies slightly depending on the angle of the sound-emitting objects in space relative to the listener, and the final effect is also influenced by the degree of diffraction of sound waves from the listener's head. It should also be noted that broadband signals are localized better than narrowband noise.

The situation with determining the depth of directional sound is much more interesting. For example, a person can determine the distance to an object by sound, however, this happens to a greater extent due to changes in sound pressure in space. Typically, the further the object is from the listener, the more the sound waves in free space are attenuated (in the room the influence of reflected sound waves is added). Thus, we can conclude that the localization accuracy is higher in a closed room precisely due to the occurrence of reverberation. Reflected waves arising in enclosed spaces make it possible to create such interesting effects as expansion of the sound stage, enveloping, etc. These phenomena are possible precisely due to the sensitivity of three-dimensional sound localization. The main dependencies that determine the horizontal localization of sound: 1) the difference in the time of arrival of the sound wave in the left and right ear; 2) differences in intensity due to diffraction on the listener's head. To determine the depth of sound, the difference in sound pressure level and the difference in spectral composition are important. Localization in the vertical plane is also strongly dependent on diffraction in the auricle.
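As an illustration of the first horizontal cue (the difference in arrival time at the two ears), here is a minimal sketch using the Woodworth spherical-head approximation; both the model and the head radius are assumptions made for illustration, not data from this text:

    import math

    def interaural_time_difference(azimuth_deg: float, head_radius_m: float = 0.0875,
                                   c: float = 343.0) -> float:
        """Woodworth approximation for a distant source: ITD = (a / c) * (theta + sin(theta))."""
        theta = math.radians(azimuth_deg)
        return head_radius_m / c * (theta + math.sin(theta))

    for az in (0, 15, 45, 90):
        itd_us = interaural_time_difference(az) * 1e6
        print(f"azimuth {az:>2} deg -> interaural delay {itd_us:.0f} microseconds")
    # At 90 degrees the delay is roughly 650-700 microseconds, the maximum for a human head.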

The situation is more complicated with modern surround sound systems based on dolby surround technology and analogues. It would seem that the principles of constructing home theater systems clearly regulate the method of recreating a fairly naturalistic spatial picture of 3D sound with the inherent volume and localization of virtual sources in space. However, not everything is so trivial, since the very mechanisms of perception and localization of a large number of sound sources are usually not taken into account. The transformation of sound by the organs of hearing involves the process of adding signals from different sources arriving at different ears. Moreover, if the phase structure of different sounds is more or less synchronous, such a process is perceived by ear as a sound emanating from one source. There are also a number of difficulties, including the peculiarities of the localization mechanism, which makes it difficult to accurately determine the direction of the source in space.

In view of the above, the most difficult task becomes the separation of sounds from different sources, especially if these different sources play a similar amplitude-frequency signal. And this is exactly what happens in practice in any modern surround sound system, and even in a conventional stereo system. When a person listens to a large number of sounds emanating from different sources, the first step is to determine whether each specific sound belongs to the source that creates it (grouping by frequency, pitch, timbre). And only at the second stage does hearing try to localize the source. After this, incoming sounds are divided into streams based on spatial characteristics (difference in time of arrival of signals, difference in amplitude). Based on the information received, a more or less static and fixed auditory image is formed, from which it is possible to determine where each specific sound comes from.

It is very convenient to track these processes using the example of an ordinary stage, with musicians fixedly located on it. At the same time, it is very interesting that if the vocalist/performer, occupying an initially certain position on the stage, begins to smoothly move around the stage in any direction, the previously formed auditory image will not change! Determining the direction of the sound emanating from the vocalist will remain subjectively the same, as if he were standing in the same place where he stood before moving. Only in the event of a sudden change in the performer’s location on stage will the formed sound image be split. In addition to the problems discussed and the complexity of the processes of localizing sounds in space, in the case of multi-channel surround sound systems, the reverberation process in the final listening room plays a rather large role. This dependence is most clearly observed when a large number of reflected sounds come from all directions - the localization accuracy deteriorates significantly. If the energy saturation of reflected waves is greater (predominant) than direct sounds, the localization criterion in such a room becomes extremely blurred, and it is extremely difficult (if not impossible) to talk about the accuracy of determining such sources.

However, in a strongly reverberating room localization theoretically occurs; in the case of broadband signals, hearing is guided by the intensity difference parameter. In this case, the direction is determined using the high-frequency component of the spectrum. In any room, the accuracy of localization will depend on the time of arrival of reflected sounds after direct sounds. If the gap between these sound signals is too small, the “law of the direct wave” begins to work to help the auditory system. The essence of this phenomenon: if sounds with a short time delay interval come from different directions, then the localization of the entire sound occurs according to the first arriving sound, i.e. the ear ignores, to some extent, reflected sound if it arrives too soon after the direct sound. A similar effect also appears when the direction of sound arrival in the vertical plane is determined, but in this case it is much weaker (due to the fact that the sensitivity of the auditory system to localization in the vertical plane is noticeably worse).

The essence of the precedence effect is much deeper and is of a psychological rather than physiological nature. A large number of experiments have been carried out to establish this dependence. The effect occurs primarily when the time of arrival of the echo, its amplitude and its direction coincide with the listener's "expectations" of how the acoustics of the particular room shape the sound image. Perhaps the person has already had listening experience in this room or in similar ones, which predisposes the auditory system to the "expected" precedence effect. To circumvent these limitations of human hearing in the case of several sound sources, various tricks are used, with whose help a more or less plausible localization of musical instruments and other sound sources in space is ultimately formed. By and large, the reproduction of stereo and multi-channel sound images rests on a grand deception and the creation of an auditory illusion.

When two or more speaker systems (for example, 5.1 or 7.1, or even 9.1) reproduce sound from different points in the room, the listener hears sounds emanating from non-existent or imaginary sources, perceiving a certain sound panorama. The possibility of this deception lies in the biological features of the human body. Most likely, a person did not have time to adapt to recognizing such deception due to the fact that the principles of “artificial” sound reproduction appeared relatively recently. But, although the process of creating an imaginary localization turned out to be possible, the implementation is still far from perfect. The fact is that the ear really perceives a sound source where it actually does not exist, but the correctness and accuracy of the transmission of sound information (in particular timbre) is a big question. Through numerous experiments in real reverberation rooms and in anechoic chambers, it was established that the timbre of sound waves from real and imaginary sources is different. This mainly affects the subjective perception of spectral loudness; the timbre in this case changes in a significant and noticeable way (when compared with a similar sound reproduced by a real source).

In the case of multi-channel home theater systems, the level of distortion is noticeably higher for several reasons: 1) Many sound signals similar in amplitude-frequency and phase characteristics simultaneously arrive from different sources and directions (including reflected waves) to each ear canal. This leads to increased distortion and the appearance of comb filtering. 2) Strong separation of loudspeakers in space (relative to each other; in multi-channel systems this distance can be several meters or more) contributes to the growth of timbre distortions and sound coloration in the area of ​​the imaginary source. As a result, we can say that timbre coloring in multi-channel and surround sound systems in practice occurs for two reasons: the phenomenon of comb filtering and the influence of reverberation processes in a particular room. If more than one source is responsible for the reproduction of sound information (this also applies to a stereo system with two sources), the appearance of a “comb filtering” effect is inevitable, caused by different arrival times of sound waves at each auditory channel. Particular unevenness is observed in the upper midrange of 1-4 kHz.
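A minimal sketch of the comb-filtering effect mentioned above: summing a direct signal with one delayed copy gives the magnitude response |1 + e^(−j·2π·f·τ)| = 2·|cos(π·f·τ)|. The delay value is an arbitrary example corresponding to a path-length difference of about 17 cm:

    import math

    def comb_magnitude(f_hz: float, delay_s: float) -> float:
        """Magnitude of H(f) = 1 + exp(-j*2*pi*f*delay), i.e. 2*|cos(pi*f*delay)|."""
        return abs(2.0 * math.cos(math.pi * f_hz * delay_s))

    delay = 0.5e-3  # s, example delay between two loudspeaker paths (~17 cm difference)
    for f in (500, 1000, 2000, 3000, 4000):
        print(f"{f:>4} Hz: |H| = {comb_magnitude(f, delay):.2f}")
    # Notches fall at 1000 Hz, 3000 Hz, ... - the kind of unevenness in the 1-4 kHz band
    # described in the paragraph above.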

Equipment.

Table “Organ of Hearing”, model “Organ of Hearing”, homemade tables “Sound Source”, “Sound Receiver”, “Noise”, “Audibility Range”. Generator, tuning fork, tuning fork with resonator box, microphone, oscilloscope, tape recorder (recording from planet Earth).

Lesson objectives:

1. Developmental goals.

  • To develop logical thinking in schoolchildren and to consider sound, its sources, perception and transmission from the point of view of biology, physics, astronomy, geography and ecology.
  • Formation of the integrity of the natural-scientific picture of the world in children.
  • Develop will and independence. Develop self-control: self-confidence, the ability to overcome difficulties in learning natural sciences.
  • To develop intellectual skills: the ability to analyze, compare the hearing organs with a microphone.

2. Educational purposes.

  • Ensure that students understand the basics of science.
  • Summarize and consolidate, systematize previously acquired knowledge in the subjects of biology, physics, astronomy, chemistry, ecology, geography.
  • Develop skills in working with game elements, video clips, and illustrative materials.
  • To create a culture of health in biology lessons.
  • To form a holistic idea of ​​nature and man as an important component of nature and as an intelligent being influencing nature.

3. Educational goals.

  • To educate an independent, free person, who has a sensory perception of nature, who is proficient in various ways of cognition.
  • To foster environmental culture and thinking in students.

Lesson type: learning new material.

Lesson form: combined lesson.

Means of education: computer, projector, multimedia teaching aids, slides with illustrations, terms, concepts, experiments, video demonstrations.

Lesson plan: (slide 2)

During the classes

I. Organizational moment.

II. Updating knowledge.

As early as H. Helmholtz it was argued that the camera represents a model of the human eye. Find similar structures in the eye and in the camera and connect them with lines.

III. Learning new material.

1. Characteristics of planet Earth.

The Earth is a blue planet; its shape is an ellipsoid of revolution, or more precisely a geoid. Its average radius is R = 6400 km, and the planet's mass is m = 6·10²⁴ kg (slide 3). There are colours and sounds in this world, but most importantly, there is intelligent life on Earth.

Man lives in a world of sounds: birdsong, the sounds of music, the noise of the forest, transport, ...

2. What is the source of sound?

The sources of sound are oscillating bodies, we will prove this experimentally. Let's assemble the installation shown on the slide.

Demonstration: We brought a tuning fork from Earth, a device consisting of a curved metal rod on a stem (Figure 1). If we strike the stem of the tuning fork with a hammer, we hear the sound made by the oscillating rod. The sound is not loud, because the surface area of the prongs of the rod is small. To amplify the sound, the stem of the tuning fork is fixed on a wooden box chosen so that its natural vibration frequency coincides with the vibration frequency of the tuning fork. Resonance occurs, the walls of the box begin to vibrate intensely at the frequency of the tuning fork, and the sound becomes louder. The box is called a resonator (slide). What is the function of a frog's resonator?

The vibrations of a sounding tuning fork can be observed in another way. Attach a needle to one prong of the tuning fork and quickly draw its tip along a smoked glass plate. If the tuning fork is not sounding, we see a straight line on the plate (Figure 2). A sounding tuning fork leaves a trace on the plate in the form of a wavy line; one complete oscillation corresponds to one crest and one trough of this line (Figure 2) (slide 4).

Conclusions from experience: Any sound source necessarily vibrates (most often these vibrations are invisible to the eye).

3. Let us now consider how sound travels.

Students' explanation: the oscillating piston (loudspeaker diaphragm), pushing air molecules, creates regions of compression and rarefaction. The direction of sound propagation coincides with the direction of motion of the air molecules, so sound is a longitudinal wave.

Waves are disturbances propagating in a medium or in space over time (slide 5). The most important and common types of waves are elastic waves, waves on the surface of a liquid, and electromagnetic waves.

4. What is the conductor of sound?

Students' conclusion from the experiment: for sound to propagate, an elastic medium such as air is needed. There is no atmosphere on the Moon, so there are no sounds there; it is a world of silence. Elastic bodies are good conductors of sound. Most metals, wood, gases and liquids are elastic bodies and therefore conduct sound well.

Sound can travel in liquid and solid media. The table “Speed ​​of sound in various media” from the physics textbook, page 125 is displayed (slide 7)

Speed of sound in various media, m/s (at t = 20 °C)

The table shows that the speed of propagation of sound waves in metals is greater than in liquids, and in liquids greater than in gases (for example, roughly 340 m/s in air, about 1500 m/s in water, and 5000–6000 m/s in steel). That is why the sounds of propellers and the impacts of stones can be clearly heard under water, and fish hear the footsteps and voices of people on the shore, as fishermen know well. The sound of an approaching train can be heard by putting an ear to the rails, since sound travels along them better than through the air; with an ear to the ground one can hear the hoofbeats of a galloping horse.

Student conclusions:

  1. The source of sound is vibrating bodies.
  2. Sound travels through an elastic medium.
  3. Soft and porous bodies are poor conductors of sound.
  4. Sound cannot travel in airless space.
  5. The volume of sound depends on the surface area of ​​oscillating bodies.

5. People communicate using speech - modulated sound vibrations. Let's look at how a person's sound source works (slide 8).

Sound arises when air passes through the vocal cords, which are located between the cartilages of the larynx and are formed by folds of the mucous membrane (explanation follows from the table). The space between the vocal cords is called the glottis. When earthlings are silent, the vocal cords are apart and the glottis has the shape of an isosceles triangle. When speaking or singing, the vocal cords come together; the exhaled air presses on the folds and they begin to vibrate - a sound is born. When whispering, the cords are brought close together but do not vibrate. The vocal cords are controlled by the brain, which sends the appropriate signals along the nerves.

The pitch of a person's voice is related to the length of the vocal cords: the shorter the cords, the greater the frequency of their vibrations and the higher the voice. Women have shorter vocal cords than men, which is why female voices are higher. The vocal cords can vibrate between 80 and 10,000 times per second. The final shaping of the sound occurs in the cavities of the nasopharynx, which act as a kind of resonators.
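The trend "shorter cords - higher pitch" can be illustrated with the idealized formula for a string fixed at both ends, f = (1/2L)·√(T/μ). The numbers below are hypothetical and serve only to show the dependence on length; real vocal folds are far more complex than a uniform string.

    import math

    def string_fundamental(length_m, tension_n, mass_per_length_kg_m):
        # Idealized string model: f = (1 / 2L) * sqrt(T / mu)
        return math.sqrt(tension_n / mass_per_length_kg_m) / (2.0 * length_m)

    # Hypothetical values chosen only to show that a shorter "string" gives a higher pitch
    for length_mm in (22.0, 17.0):
        f = string_fundamental(length_mm / 1000.0, tension_n=0.5, mass_per_length_kg_m=0.002)
        print(f"length {length_mm} mm -> about {f:.0f} Hz")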

6. How is sound perceived?

We know that the source of sound is a vibrating body and that sound propagates in an elastic medium. Now let's find out how sound is perceived.

A microphone can serve as a receiver of sound. The microphone converts mechanical sound vibrations into electrical ones. The signals it picks up are weak, and the energy converted by the microphone is very small, so the electrical signals from the microphone are amplified.

- For earthlings, the receiver of sound is the hearing apparatus, or organ of hearing. Between the sounding body (sound source) and the ear (sound receiver) there is a substance that transmits sound vibrations from source to receiver. Most often, this substance is air.

The hearing organ of earthlings consists of three sections: the outer ear, the middle ear, and the inner ear. The outer ear is formed by the pinna, the external auditory canal, and the eardrum; its function is to capture and conduct sound. The middle ear is an air-filled chamber with a volume of 1-2 ml. In this chamber there are three ossicles articulated with one another: the malleus, the incus, and the stapes. The malleus is connected to the eardrum, and the stapes is connected to the inner ear through the oval window. The middle ear is connected to the nasopharynx through the Eustachian tube. During sudden changes in pressure (takeoff and landing of an airplane, ascent of a submarine), it is recommended to talk, open your mouth, and swallow, since this opens the Eustachian tube and equalizes the pressure on both sides of the eardrum (slide 9).

The inner ear is located in the thickness of the temporal bone (slide 10); inside it there is a membranous labyrinth. The inner ear is filled with fluid. It consists of three semicircular canals - the vestibular apparatus, which is not involved in the perception of sound - and the cochlea, which has the form of a spiral canal. The main membrane stretches along the cochlear canal, and across it fibers are stretched like the rungs of a ladder. On these fibers sit columnar epithelial cells that form the organ of Corti. Sensitive fibers of the auditory nerve end on these epithelial cells. In the cochlea, sound energy is converted into the energy of nerve impulses, which are transmitted along the auditory nerve to the auditory center located in the temporal lobe of the cerebral cortex.

The operating principle of the ear is the same as that of a microphone.

7. How is sound transmitted?

Sound vibrations in the air cause vibrations of the eardrum (which corresponds to the membrane of a microphone) and are transmitted through the auditory ossicles to the inner ear, where they cause vibrations of the fluid filling the cochlear canal. At the same time, the fibers of the main membrane and the so-called hair cells of the organ of Corti begin to vibrate. With each upward movement, their hairs press against the tectorial (integumentary) membrane and bend, the membrane potential of the cells changes, and excitation arises in the nerve fibers (slide 11).

The brain constantly processes incoming impulses, resulting in sound sensations.

8. Ecology of hearing.

The human sound receiver is negatively affected by noise. Noise is any kind of sound that is perceived as unpleasant, disturbing or even painful. Typical examples of noise are whistling, crackling, hissing. (The story is accompanied by audio noises).

Under constant sharp impacts of sound waves, the eardrum vibrates with a large amplitude. Because of this, it gradually loses its elasticity, and earthlings' hearing becomes dull. In addition, through the organ of hearing, noise affects the central nervous system and can cause a variety of physiological disorders (increased heart rate, increased blood pressure) and mental disorders (decreased attention, nervousness). Long-term exposure to noise is one of the factors contributing to the development of ulcers and even infectious diseases. As a result, the life expectancy of earthlings shortens and the gene pool of humanity deteriorates.

As a rule, noise annoys us: it interferes with our work, rest, and thinking. But noise can also have a calming effect. Such an influence on a person is exerted, for example, by the rustling of leaves, the roar of the sea surf. (The story is accompanied by sound recordings).

What is noise? It is understood as random complex vibrations of various physical natures.

Noise pollution in the environment is increasing all the time.

9. Quantitative characteristics of sound. Slide 12.

Noise is a type of sound, although it is often called "unwanted sound." A person hears sounds with oscillation frequencies in the range 16-20,000 Hz. When a sound wave propagates, the alternating compressions and rarefactions of the air change the pressure on the eardrum. Pressure is measured in N/m², and sound intensity in W/m².

The minimum sound that a person can perceive is called the hearing threshold. It differs from person to person, so by convention the hearing threshold is taken to be a sound pressure of 2·10⁻⁵ N/m² at 1000 Hz, corresponding to an intensity of 10⁻¹² W/m². Measured sounds are compared with these values.

The unit of loudness level is called the bel, after the inventor of the telephone, A. Bell (1847-1922). In practice, loudness is measured in decibels: 1 dB = 0.1 B (bel).
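A short sketch of how a level in decibels follows from these reference values (the intensities and pressures in the example are assumed only to give round numbers):

    import math

    I0 = 1e-12   # threshold intensity, W/m^2
    P0 = 2e-5    # threshold sound pressure, N/m^2

    def level_from_intensity(intensity_w_m2):
        return 10.0 * math.log10(intensity_w_m2 / I0)

    def level_from_pressure(pressure_n_m2):
        return 20.0 * math.log10(pressure_n_m2 / P0)

    print(level_from_intensity(1e-12))  # 0 dB: the hearing threshold itself
    print(level_from_intensity(1e-4))   # 80 dB: the boundary mentioned below as harmful
    print(level_from_pressure(2e-1))    # 80 dB again, expressed through pressure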

The perception of sound depends not only on its quantitative characteristics (pressure and intensity) but also on its quality - frequency. The same sound strength at different frequencies differs in loudness. Some people cannot hear high-frequency sounds at all. Thus, in older people the upper limit of sound perception decreases to 6000 Hz; they do not hear, for example, the squeak of a mosquito, which produces sounds with a frequency of about 20,000 Hz.

Let's look at the “Noise” table. It shows various noise sources. Sounds ranging from 0 to 80 dB are pleasant to perceive and do not cause negative emotions. (Tape recording starts: birds singing, pleasant music, whispers...)

If the volume exceeds 80 dB, the noise has a harmful effect on health: it increases blood pressure, causes heart rhythm disturbances, and prolonged exposure to intense noise leads to deafness.

A very strong sound (with a volume above 180 dB) can even cause a rupture of the eardrum. Noise must be dealt with. The ability to maintain silence is an indicator of a person’s culture and his good attitude towards others. Earthlings need silence just as much as they need the sun and fresh air.

10. Noise pollution in the city of Naberezhnye Chelny.

In our city, the main source of noise is road transport; we have no plants or factories. The sources of noise in residential and public premises are, first of all, the activities of people (talking, shouting, playing musical instruments, walking, moving furniture), the associated operation of radio and television sets, tape recorders, and electromechanical household appliances, as well as the operation of sanitary and plumbing equipment.

Ecology and hearing hygiene (story on slide 13).

Hearing impairment and weakening can be caused by:

1. Internal changes (according to the table)

  • Damage to the auditory nerve -> disruption of impulse transmission to the auditory cortex.
  • The formation of a “cerumen plug” in the external auditory canal -> disruption of the transmission of sound vibrations to the inner ear.

2. External factors (slide 14)

It is forbidden (slide 15):

  • Listen to very loud music.
  • During strong, sharp sounds, keep your mouth closed.
  • In strong winds and sub-zero temperatures, walk without a hat.
  • Try to remove foreign objects from the ear canal on your own.

IV. Conclusion.

But absolute silence also depresses a person. In complete silence, for example in a soundproof chamber, sounds and rustlings that go unnoticed under normal conditions immediately begin to disturb you - heartbeat, pulse, breathing, even the rustle of eyelashes. Under conditions of absolute silence these normally inaudible sounds are perceived with such intensity that they can cause serious mental disorders in people who remain in a soundproof chamber for a long time. As we see, the nature of noise is dual: it is harmful and necessary at the same time. Therefore, when we talk about combating noise, we are not talking about all sounds in general, but only about unwanted sounds that irritate and harm the body. It has been established, for example, that people engaged in mental work and people with heightened sensitivity (scientists, representatives of creative professions) feel the impact of noise more acutely than people in other kinds of work. Therefore, from a subjective point of view, noise can be defined as any unwanted, disturbing, harmful sound.

Noises that are sharp, unstable, unexpected, or irregularly repeated are especially harmful. People live in a world of sounds. Sound is a mechanical wave. The human sound receiver - the ear - perceives only waves with a frequency from 16 to 20,000 Hz as sounds. With their voice, people can convey not only information, but also feelings and mood: joy, anger, threat, ridicule.

V. Homework: Slide 16, 17.

  • Level 1 (according to the program): Work according to the textbook.
  • Level 2 (semi-creative level):

Answer the following questions:

  1. Why do they tap with a hammer when checking carriage wheels while the train is parked?
  2. In your opinion, will sound waves from the environment be perceived by a person if any part of the auditory analyzer is damaged (justify your answer)?
  3. How do you think sound vibrations are transmitted from the environment to the auditory receptors of earthlings?
  4. The vibration frequency of hummingbird wings is 35-50 Hz. Will you be able to hear a hummingbird in flight?
  5. Two people listen, hoping to hear the sound of an approaching train. One of them put his ear to the rails, the other did not. Which of them will know first about the approaching train and why?
  • Level 3. Find similar formations in the structure of the microphone and the hearing organ.

Compare the structure of the microphone and the hearing organ (slide 18).

LITERATURE (slides 19-20)

  1. Rezanova E.A., Antonova I.P. Human biology in tables, figures and diagrams. – M.: Publishing house - school, 1998.
  2. Human anatomy. How your body works / Translated from English by O.V. Ivanova. - M.: LLC TD “Publishing House World of Books”, 2007. - Pp. 80-83, ill.
  3. Peryshkin A.V., Gutnik E.M. Physics, 9th grade. - M.: Bustard, 2001.
  4. Mangutova L.A., Zefirova T.P. Popular ecology. – Kazan: Ecological Fund of the Republic of Tatarstan, 1997.
  5. Tsuzmet A.M., Petrishina O.L., Biology. Man and his health. 9th grade. - M.: Education, 1990.
  6. Sonin N.I., Sapin M.R. Biology. Human. 8th grade. – M.: Bustard, 2001.
  7. Sapin M.R., Bilich G.L. Human Anatomy. - M.: Higher School, 1989.
  8. Bordovsky G.A. Physical foundations of natural science. - M.: Bustard, 2004.
  9. Bogdanova T.L., Solodova E.A. Biology. Handbook for high school students and applicants to universities. – M.: AST – PRESS SCHOOL, 2004.
  10. Dobrenkov G.A. Worldview functions of physical chemistry // Chemistry and worldview / Ed. Yu.A. Ovchinnikov. – M.: Nauka, 1986.
  11. Kuzmenko N.E., Eremin V.V., The beginnings of chemistry. – M.: Exam, 2001.
  12. Kutyina I.V. Formation of a scientific worldview. The relationship between physics, chemistry, biology. // Biology. Weekly supplement to the newspaper “First of September”. – 1998. – No. 1-10.
  13. Ozherelev D.I. Formation of a scientific worldview in teaching chemistry. – M.: Higher School, 1982.
  14. Chernova N.M. Ecology. - M.: Education, 1988.
  15. Reimers N.P. Protection of nature and the human environment. - M.: Education, 1992.

Later in the course of evolution, the highest types of sensitivity arose - the perception of sounds (hearing) and of light (vision). The exceptional importance of hearing and vision lies in the fact that they signal from afar about objects and phenomena in the environment; therefore, in physiology they are called distance analyzers. The highest type of chemical sensitivity, the sense of smell, also has this property to a large extent. However, this property reaches a special degree of development precisely in the organs of hearing and vision.

Hearing arose on the basis of sensitivity to mechanical stimulation. Here, however, it is no longer the touch of particular objects that is perceived, but an incomparably more subtle phenomenon - vibrations of the air. The perception of air vibrations is of enormous importance.

All the objects around us - solids, liquids and gases - have a certain elasticity. Therefore, when one body touches another, and even more so when they strike each other, these bodies perform a series of oscillatory movements - simply put, they vibrate and tremble. There is no emptiness in the nature immediately surrounding us, so any movement of one object brings it into contact with another: the objects vibrate, and these vibrations are transmitted to the air. As a result, we hear sound - information about movement around us. Whether an anvil trembles under the blows of a hammer, whether water vibrates from a stone thrown into it, whether a singer's vocal cords tremble under the pressure of a stream of air, whether the pages of a book rustle under the hand turning them - all this sets the air vibrating, and the vibrations spread around at a speed of 340 m per second, or about 1 km in 3 seconds, and we hear the sound. How is it perceived?

Air vibrations act on a thin but elastic membrane stretched across the end of the external auditory canal; this membrane is the eardrum. Its thickness is 0.1 mm. From it, through a chain of three tiny bones - which reduce the range of the vibrations by 50 times but increase their force by 50 times - the vibrations are transmitted to the fluid in the inner ear. Only here, in fact, does the perception of sound begin. Since the eardrum is only one of the links in the transmission of sound to the inner ear, damage to its integrity does not lead to loss of hearing, although, of course, it somewhat reduces it.

The main part of the inner ear is a tube twisted in the shape of a snail shell and therefore called the cochlea. Between its walls are stretched about 24 thousand of the finest fibers, or threads, whose length gradually decreases from the apex of the cochlea to its base. These are our strings. If you sing a note loudly in front of a piano, the piano will answer. Sing a low note, and it responds with a low sound; make a high-pitched squeak, and it answers with a high-pitched sound. This phenomenon is called resonance. Each piano string is tuned to sound at a certain pitch, that is, to vibrate at a certain frequency (the more frequent the vibrations, the higher the sound). If a string is exposed to air vibrations of the same frequency as the one to which it is tuned, the string resonates - it responds.

The perception of sound by our ears is based on the same principle. Because of their different lengths, each of the fibers is tuned to a certain vibration frequency - from 16 to 20,000 per second. Long fibers at the apex of the cochlea perceive slow vibrations, i.e., low sounds, while short fibers at the base of the cochlea perceive rapid vibrations. This was proven by I. P. Pavlov's student, the subtle experimenter L. A. Andreev. The conditioned-reflex method finally made it possible to find out whether an animal hears particular sounds when one or another part of the cochlea is destroyed. It was found that if the upper part of the cochlea is destroyed in a dog, then no matter how many times low sounds are given before feeding, a conditioned reflex to them will not form. This undoubtedly proves that the animal no longer perceives these sounds. In this way a number of sections of the cochlea were "probed." Only the experiments of L. A. Andreev finally proved that the fibers of the cochlea are indeed our resonators. The famous H. Helmholtz, who put forward the resonance theory of hearing back in the 19th century, had no way to prove it experimentally.

If the air vibrates more than 20,000 times per second, we no longer perceive these vibrations with our ears. They are called ultrasounds. In dogs, as studies using the conditioned reflex method have shown, the hearing limit reaches 40,000 Hz. This means that the dog hears ultrasounds that are inaccessible to humans. This can be used, by the way, by circus trainers to give secret signals to animals.

Structural and functional characteristics of the auditory analyzer

General concepts of the physiology of the auditory analyzer

HEARING ANALYZER

With the help of an auditory analyzer, a person navigates the sound signals of the environment and forms appropriate behavioral reactions, for example defensive or food-procuring. A person’s ability to perceive spoken and vocal speech and musical works makes the auditory analyzer a necessary component of the means of communication, cognition, and adaptation.

An adequate stimulus for the auditory analyzer is sound, i.e., oscillatory movements of particles of elastic bodies that propagate in the form of waves in a wide variety of media, including air, and are perceived by the ear.

Sound wave vibrations (sound waves) are characterized by frequency and amplitude.

The frequency of sound waves determines the pitch of the sound. A person distinguishes sound waves with frequencies from 20 to 20,000 Hz. Sounds with frequencies below 20 Hz (infrasound) and above 20,000 Hz (ultrasound) are not perceived by humans. Sound waves with sinusoidal, or harmonic, vibrations are called a tone.

Sound consisting of unrelated frequencies is called noise. When the frequency of the sound waves is high, the tone is high; when it is low, the tone is low.

The second characteristic of sound that the auditory sensory system distinguishes is its strength, which depends on the amplitude of the sound waves. The strength of sound is perceived by humans as loudness.

The sensation of loudness increases as the sound becomes stronger and also depends on the frequency of the sound vibrations, i.e., the loudness of a sound is determined by the interaction of its intensity (strength) and pitch (frequency). The unit of measurement of loudness is the bel; in practice the decibel (dB), i.e., 0.1 bel, is usually used. A person also distinguishes sounds by timbre, or "coloring." The timbre of a sound signal depends on its spectrum, i.e., on the set of additional frequencies - overtones - that accompany the fundamental frequency, or tone. By timbre one can distinguish sounds of the same pitch and loudness, which is the basis for recognizing people by voice.
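The idea that timbre is determined by the spectrum of overtones can be sketched as follows (a minimal illustration; the sampling rate, fundamental frequency, and overtone amplitudes are arbitrary assumptions, not measured data): two signals with the same fundamental, and hence the same pitch and peak level, still sound different because the energy is distributed differently among the overtones.

    import numpy as np

    fs = 8000                      # sampling rate, Hz (assumed)
    t = np.arange(0, 0.5, 1 / fs)  # half a second of signal
    f0 = 220.0                     # fundamental frequency (the tone), Hz

    # Hypothetical amplitudes of the fundamental and its overtones (harmonics 2-4)
    spectrum_a = [1.0, 0.6, 0.3, 0.1]    # "rich" timbre
    spectrum_b = [1.0, 0.1, 0.05, 0.0]   # "pure" timbre

    def synthesize(partial_amplitudes):
        signal = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
                     for k, a in enumerate(partial_amplitudes))
        return signal / np.max(np.abs(signal))  # same peak level and pitch, different timbre

    tone_a, tone_b = synthesize(spectrum_a), synthesize(spectrum_b)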

The sensitivity of the auditory analyzer is determined by the minimum sound intensity sufficient to produce an auditory sensation. In the range of sound vibrations from 1000 to 3000 Hz, which corresponds to human speech, the ear has its greatest sensitivity. This set of frequencies is called the speech zone.

The receptor (peripheral) section of the auditory analyzer, which converts the energy of sound waves into the energy of nervous excitation, is represented by the receptor hair cells of the organ of Corti located in the cochlea. Auditory receptors (phonoreceptors) belong to the mechanoreceptors, are secondary receptors, and are represented by inner and outer hair cells. Humans have approximately 3,500 inner and 20,000 outer hair cells, located on the basilar membrane inside the middle canal of the inner ear.



The inner ear (sound-receiving apparatus), together with the middle ear (sound-transmitting apparatus) and the outer ear (sound-collecting apparatus), are combined into the concept of the organ of hearing (Fig. 2.6).

The outer ear, due to the auricle, ensures the capture of sounds, their concentration toward the external auditory canal, and an increase in the intensity of sounds. In addition, the structures of the outer ear perform a protective function, shielding the eardrum from mechanical and temperature influences of the external environment.

Fig. 2.6. The organ of hearing

The middle ear (sound-conducting section) is represented by the tympanic cavity, where three auditory ossicles are located: the malleus, the incus and the stapes. The middle ear is separated from the external auditory canal by the eardrum. The handle of the malleus is woven into the eardrum; its other end articulates with the incus, which in turn articulates with the stapes. The stapes adjoins the membrane of the oval window. The area of the tympanic membrane (70 mm²) is considerably larger than the area of the oval window (3.2 mm²), due to which the pressure of sound waves on the membrane of the oval window increases by approximately 25 times. Since the lever mechanism of the ossicles reduces the amplitude of the vibrations by approximately 2 times, the force transmitted to the oval window is increased by the same factor. Thus, the overall sound amplification in the middle ear is approximately 60-70 times. If the amplifying effect of the outer ear is also taken into account, this value reaches 180-200 times. The middle ear has a special protective mechanism represented by two muscles: the muscle that tenses the eardrum and the muscle that fixes the stapes. The degree of contraction of these muscles depends on the strength of the sound vibrations. With strong sound vibrations, the muscles limit the amplitude of vibration of the eardrum and the movement of the stapes, thereby protecting the receptor apparatus of the inner ear from excessive stimulation and destruction. In the case of an instantaneous strong stimulus (a sudden blow), this protective mechanism does not have time to act. The contraction of both muscles of the tympanic cavity is carried out by an unconditioned-reflex mechanism that closes at the level of the brain stem. The pressure in the tympanic cavity is equal to atmospheric pressure, which is very important for the adequate perception of sounds. This function is performed by the Eustachian tube, which connects the middle ear cavity with the pharynx. When swallowing, the tube opens, ventilating the cavity of the middle ear and equalizing the pressure in it with atmospheric pressure. If the external pressure changes rapidly (a quick ascent to altitude) and swallowing does not occur, the pressure difference between the atmospheric air and the air in the tympanic cavity leads to tension of the eardrum, unpleasant sensations, and reduced perception of sounds.
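A back-of-the-envelope check of these figures (using only the round numbers quoted above, so the result is an order-of-magnitude estimate rather than an exact value):

    eardrum_area_mm2 = 70.0
    oval_window_area_mm2 = 3.2
    lever_force_gain = 2.0  # the ossicular lever roughly halves the amplitude, doubling the force

    area_gain = eardrum_area_mm2 / oval_window_area_mm2  # about 22, i.e. "approximately 25 times"
    total_gain = area_gain * lever_force_gain            # about 44 - the same order as the quoted 60-70

    print(f"area gain ~{area_gain:.0f}x, overall middle-ear pressure gain ~{total_gain:.0f}x")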

The inner ear is represented by the cochlea - a spirally twisted bony canal of 2.5 turns - which is divided by the main (basilar) membrane and Reissner's membrane into three narrow passages (scalae). The upper canal (scala vestibuli) starts from the oval window, connects with the lower canal (scala tympani) through the helicotrema (an opening at the apex), and the lower canal ends at the round window. Both canals form a single unit and are filled with perilymph, similar in composition to cerebrospinal fluid. Between the upper and lower canals lies the middle one (scala media). It is isolated and filled with endolymph. Inside the middle canal, on the main membrane, is the actual sound-receiving apparatus - the organ of Corti - with its receptor cells, representing the peripheral section of the auditory analyzer (Fig. 2.7).

The main membrane near the oval window is 0.04 mm wide; toward the apex it gradually widens, reaching 0.5 mm at the helicotrema. Above the organ of Corti lies the tectorial (integumentary) membrane of connective-tissue origin, one edge of which is fixed while the other is free. The hairs of the outer and inner hair cells are in contact with the tectorial membrane. When the hairs are deflected, the conductivity of the ion channels of the receptor (hair) cells changes, and microphonic and summation receptor potentials are formed.

Fig. 2.7. The organ of Corti

The mediator acetylcholine is formed and released into the synaptic cleft of the receptor-afferent synapse. This leads to excitation of the auditory nerve fiber and to the emergence of an action potential in it. In this way the energy of sound waves is transformed into a nerve impulse. Each auditory nerve fiber has a frequency tuning curve, also called a frequency-threshold curve. This indicator characterizes the receptive field of the fiber, which may be narrow or wide: it is narrow for quiet sounds and widens as their intensity increases.

The conducting section of the auditory analyzer is represented by a peripheral bipolar neuron located in the spiral ganglion of the cochlea (the first neuron). The fibers of the auditory (cochlear) nerve, formed by the axons of the neurons of the spiral ganglion, end on the cells of the nuclei of the cochlear complex of the medulla oblongata (the second neuron). Then, after a partial decussation, the fibers go to the medial geniculate body of the metathalamus, where switching occurs again (the third neuron); from there the excitation enters the cortex (the fourth neuron). In the medial (internal) geniculate bodies, as well as in the inferior colliculi of the quadrigeminal plate, there are centers of the reflex motor reactions that occur in response to sound.

The central, or cortical, section of the auditory analyzer is located in the upper part of the temporal lobe of the cerebrum (the superior temporal gyrus, Brodmann areas 41 and 42). The transverse temporal gyrus (Heschl's gyrus) is of particular importance for the function of the auditory analyzer.

The auditory sensory system is complemented by feedback mechanisms that regulate the activity of all levels of the auditory analyzer with the participation of descending pathways. Such pathways begin from the cells of the auditory cortex and switch successively in the medial geniculate bodies of the metathalamus, in the posterior (inferior) colliculus, and in the nuclei of the cochlear complex. As part of the auditory nerve, the centrifugal fibers reach the hair cells of the organ of Corti and tune them to the perception of particular sound signals.

Perception of pitch, sound intensity, and sound source location begins when sound waves enter the outer ear, where they vibrate the eardrum. Vibrations of the tympanic membrane through the system of auditory ossicles of the middle ear are transmitted to the membrane of the oval window, which causes vibrations of the perilymph of the vestibular (upper) scala. These vibrations are transmitted through the helicotrema to the perilymph of the scala tympani (lower) and reach the round window, displacing its membrane towards the cavity of the middle ear (Fig. 2.8).

Vibrations of the perilymph are also transmitted to the endolymph of the membranous (middle) canal, which causes the main membrane, consisting of individual fibers stretched like piano strings, to vibrate. When exposed to sound, the membrane fibers begin to vibrate along with the receptor cells of the organ of Corti located on them. In this case, the hairs of the receptor cells come into contact with the tectorial membrane, and the cilia of the hair cells are deformed. First, a receptor potential appears, and then an action potential (nerve impulse), which is then carried along the auditory nerve and transmitted to other parts of the auditory analyzer.

Electrical phenomena in the cochlea. Five different electrical phenomena can be detected in the cochlea.

1. The membrane potential of the auditory receptor cell characterizes the resting state.

2. The endolymph potential, or endocochlear potential, is caused by different levels of redox processes in the canals of the cochlea, which results in a potential difference (80 mV) between the endolymph of the middle canal of the cochlea (which is positively charged) and the perilymph of the upper and lower canals. This endocochlear potential influences the membrane potential of the auditory receptor cells, creating in them a critical level of polarization at which a slight mechanical action during contact of the receptor hairs with the tectorial membrane leads to their excitation.

Fig. 2.8. Cochlear canals:

a - the middle and inner ear in section (after P. Lindsay and D. Norman, 1974); b - propagation of sound vibrations in the cochlea

3. The cochlear microphonic effect was demonstrated in an experiment on cats. Electrodes inserted into the cochlea were connected to an amplifier and a loudspeaker. If various words are spoken next to the cat's ear, they can be heard from the loudspeaker in another room. This potential is generated on the hair cell membrane as a result of deformation of the hairs in contact with the tectorial membrane. The frequency of the microphonic potentials corresponds to the frequency of the sound vibrations, and their amplitude, within certain limits, is proportional to the intensity of the sound. Sound vibrations acting on the inner ear give rise to a microphonic effect superimposed on the endocochlear potential, causing its modulation.

4. The summation potential differs from the microphone potential in that it reflects not the shape of the sound wave, but its envelope. It is a set of microphone potentials that arise under the influence of strong sounds with a frequency above 4000 - 5000 Hz. Microphone and summation potentials are associated with the activity of outer hair cells and are considered as receptor potentials.

5. The action potential of the auditory nerve is recorded in its fibers; the frequency of the impulses corresponds to the frequency of sound waves, if it does not exceed 1000 Hz. When exposed to higher tones, the frequency of impulses in the nerve fibers does not increase, since 1000 impulses/s is almost the maximum possible frequency of impulse generation in the auditory nerve fibers. The action potential in the nerve endings is recorded 0.5–1.0 ms after the onset of the microphone effect, which indicates synaptic transmission of excitation from the hair cell to the auditory nerve fiber.

The perception of sounds of different pitch (frequency), according to Helmholtz's resonance theory, is due to the fact that each fiber of the main membrane is tuned to a sound of a certain frequency. Thus, low-frequency sounds are perceived by the long fibers of the main membrane, located closer to the apex of the cochlea, while high-frequency sounds are perceived by the short fibers, located closer to the base of the cochlea. When a complex sound acts on the ear, various fibers of the membrane vibrate.

In the modern interpretation, the resonance mechanism underlies the place theory, according to which the entire membrane enters a state of vibration, but the maximum deflection of the main membrane of the cochlea occurs only at a particular location. As the frequency of the sound vibrations increases, the point of maximum deflection shifts toward the base of the cochlea, where the shorter fibers of the main membrane are located - short fibers can vibrate at higher frequencies. Excitation of the hair cells of this particular section of the membrane is transmitted via the mediator to the auditory nerve fibers in the form of a certain number of impulses, whose repetition rate is lower than the frequency of the sound waves (the lability of nerve fibers does not exceed 800-1000 Hz), while the frequency of perceived sound waves reaches 20,000 Hz. In this way, a spatial type of coding of the pitch and frequency of sound signals is realized.
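The place principle is often summarized by an empirical frequency-position map of the human cochlea, the Greenwood function; the constants below are the commonly cited values and are an assumption of this sketch, not part of the text above:

    def greenwood_frequency(relative_position, A=165.4, a=2.1, k=0.88):
        """Characteristic frequency (Hz) at a point on the basilar membrane.

        relative_position: 0.0 at the apex (long fibers, low tones),
                           1.0 at the base (short fibers, high tones).
        Constants are commonly cited values for the human cochlea (assumed here).
        """
        return A * (10.0 ** (a * relative_position) - k)

    for x in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"position {x:.2f} -> about {greenwood_frequency(x):,.0f} Hz")

The map runs from roughly 20 Hz at the apex to about 20,000 Hz at the base, matching the audible range mentioned above.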

For tones up to approximately 800 Hz, in addition to spatial coding, temporal (frequency) coding also occurs: information is again transmitted along particular fibers of the auditory nerve, but in the form of volleys of impulses whose repetition rate reproduces the frequency of the sound vibrations. Individual neurons at different levels of the auditory sensory system are tuned to a specific sound frequency, i.e., each neuron has its own specific frequency threshold and its own specific sound frequency to which its response is maximal. Thus, from the whole set of sounds each neuron perceives only a certain, rather narrow, section of the frequency range; these sections do not coincide with one another, and together the ensembles of neurons cover the entire frequency range of audible sounds, which ensures full-fledged auditory perception.

The validity of this position is confirmed by the results of human hearing prosthetics, when electrodes were implanted into the auditory nerve, and its fibers were irritated by electrical impulses of different frequencies that corresponded to sound combinations of certain words and phrases, providing semantic perception of speech.

Sound intensity analysis also occurs in the auditory sensory system. In this case, the strength of sound is encoded both by the frequency of impulses and by the number of excited receptors and corresponding neurons. In particular, outer and inner hair receptor cells have different excitation thresholds. Internal cells are excited at a greater sound intensity than external ones. In addition, the excitation thresholds of internal cells are also different. In this regard, depending on the intensity of the sound, the ratio of excited receptor cells of the organ of Corti and the nature of the impulses entering the central nervous system change. Neurons in the auditory sensory system have different response thresholds. With a weak sound signal, only a small number of more excitable neurons are involved in the reaction, and with increased sound, neurons with less excitability are excited.

It should be noted that in addition to air conduction there is bone conduction of sound, i.e., the conduction of sound directly through the bones of the skull. In this case, sound vibrations cause the bones of the skull and the labyrinth to vibrate, which raises the pressure of the perilymph in the vestibular canal more than in the tympanic canal, since the membrane covering the round window is elastic while the oval window is closed by the stapes. As a result, the main membrane is displaced, just as with air transmission of sound vibrations.

Localization of a sound source is possible thanks to binaural hearing, i.e., the ability to hear with two ears simultaneously. Thanks to binaural hearing, a person can localize the source of a sound more accurately than with monaural hearing and can determine the direction of the sound. For high-pitched sounds, the source is located by the difference in the strength of the sound arriving at the two ears, caused by their different distances from the source. For low sounds, the difference in the arrival time of the same phases of the sound wave at the two ears is what matters.
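For the low-frequency (time-difference) cue, the maximum interaural delay can be estimated from simple geometry; the ear spacing and speed of sound below are assumed round values:

    import math

    EAR_SPACING_M = 0.21    # assumed distance between the ears
    SPEED_OF_SOUND = 340.0  # m/s, in air

    def interaural_time_difference(source_angle_deg):
        # Extra path to the far ear is roughly d * sin(angle); angle 0 = straight ahead
        return EAR_SPACING_M * math.sin(math.radians(source_angle_deg)) / SPEED_OF_SOUND

    for angle in (0, 30, 60, 90):
        print(f"{angle:>2} deg -> {interaural_time_difference(angle) * 1e6:.0f} microseconds")

Even for a source directly to one side the delay is only about 0.6 ms, which, roughly speaking, is why this cue is useful mainly for low-frequency sounds, whose period is longer than the delay itself.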

Determining the location of a sounding object is carried out either by perceiving sounds directly from the sounding object - primary localization, or by perceiving sound waves reflected from the object - secondary localization, or echolocation. Some animals (dolphins, bats) navigate in space using echolocation.

Auditory adaptation is a change in auditory sensitivity during the action of sound. It consists of corresponding changes in the functional state of all parts of the auditory analyzer. An ear adapted to silence has a higher sensitivity to sound stimulation (auditory sensitization). With prolonged listening, hearing sensitivity decreases. A major role in auditory adaptation is played by the reticular formation, which not only changes the activity of the conducting and cortical sections of the auditory analyzer but also, through centrifugal influences, regulates the sensitivity of the auditory receptors, determining the level of their "tuning" to the perception of auditory stimuli.

In the organ of hearing there are three sections: the outer ear, the middle ear, and the inner ear.

The outer ear includes the pinna and the external auditory canal, separated from the middle ear by the eardrum. The auricle, adapted for capturing sounds, is formed by elastic cartilage covered with skin. The lower part of the auricle (lobe) is a fold of skin that does not contain cartilage. The auricle is attached to the temporal bone by ligaments.

The external auditory canal has cartilaginous and bony parts. In the place where the cartilaginous part passes into the bone, the auditory canal has a narrowing and bend. The length of the external auditory canal in an adult is about 33-35 mm, the diameter of its lumen varies in different areas from 0.8 to 0.9 cm. The external auditory canal is lined with skin, in which there are tubular glands (modified sweat glands) that produce a yellowish secretion - earwax.

The eardrum separates the outer ear from the middle ear. It is a connective-tissue plate covered on the outside with thin skin and on the inside, on the side of the tympanic cavity, with mucous membrane. In the center of the eardrum there is a depression (the umbo) - the place where one of the auditory ossicles, the malleus, is attached to it. The tympanic membrane has an upper, thin, free, unstretched part that contains no collagen fibers, and a lower, elastic, stretched part. The membrane is positioned obliquely: it forms an angle of 45-55° with the horizontal plane, open toward the lateral side.

The middle ear is located inside the pyramid of the temporal bone; it includes the tympanic cavity and the auditory tube connecting the tympanic cavity to the pharynx. The tympanic cavity, with a volume of about 1 cm³, is located between the eardrum on the outside and the inner ear on the medial side. In the tympanic cavity, which is lined with mucous membrane, there are three auditory ossicles movably connected to each other (the malleus, the incus, and the stapes), which transmit the vibrations of the eardrum to the inner ear.

The movement of the auditory ossicles is restrained by miniature muscles attached to them - the stapedius muscle and the muscle that stretches the tympanic membrane.

The tympanic cavity has six walls. The upper wall (tegmental) separates the tympanic cavity from the cranial cavity. The lower wall (jugular) is adjacent to the jugular fossa of the temporal bone. The medial wall (labyrinthine) separates the tympanic cavity from the inner ear.

In this wall there is an oval window of the vestibule, closed by the base of the stapes, and a round window of the cochlea, covered by a secondary tympanic membrane. The lateral wall (membranous) is formed by the tympanic membrane and the surrounding parts of the temporal bone. On the posterior (mastoid) wall there is an opening - the entrance to the mastoid cave. Below this hole there is a pyramidal eminence, inside which the stapedius muscle is located. The anterior (carotid) wall separates the tympanic cavity from the canal of the internal carotid artery. On this wall, the tympanic opening of the auditory tube opens, which has bone and cartilaginous parts. The bony part is the semi-canal of the auditory tube, which is the lower part of the muscular-tubal canal. In the upper hemicanal there is a muscle that strains the tympanic membrane.

The inner ear is located in the pyramid of the temporal bone between the tympanic cavity and the internal auditory canal. It is a system of narrow bone cavities (labyrinths) containing receptor apparatuses that perceive sound and changes in body position.

In the bone cavities lined with periosteum, there is a membranous labyrinth, repeating the shape of the bone labyrinth. Between the membranous labyrinth and the bone walls there is a narrow gap - the perilymphatic space, filled with fluid - perilymph.

The bony labyrinth consists of the vestibule, three semicircular canals and the cochlea. The bony vestibule has the shape of an oval cavity communicating with the semicircular canals. On the lateral wall of the bony vestibule there is an oval-shaped window of the vestibule, closed by the base of the stapes. At the level of the beginning of the cochlea there is a round window of the cochlea, covered with an elastic membrane. Three bony semicircular canals lie in three mutually perpendicular planes. The anterior semicircular canal is located in the sagittal plane, the lateral canal is located in the horizontal plane, and the posterior canal is located in the frontal plane. Each semicircular canal has two legs, one of which (ampullary bone pedicle) forms an extension - an ampulla - before flowing into the vestibule. The pedicles of the anterior and posterior semicircular canals connect and form a common bony pedicle. Therefore, three canals open into the vestibule with five openings.

The bony cochlea forms 2.5 turns around a horizontally lying rod (the modiolus). A bony spiral plate, pierced by thin tubules, winds around the rod like a screw thread; the fibers of the cochlear part of the vestibulocochlear nerve pass through these tubules. At the base of the plate is a spiral canal in which the spiral (cochlear) ganglion lies. The plate, together with the membranous cochlear duct attached to it, divides the cavity of the cochlear canal into two spirally wound passages - the scala vestibuli and the scala tympani - which communicate with each other in the region of the dome of the cochlea.

The walls of the membranous labyrinth are formed by connective tissue. The membranous labyrinth is filled with fluid - endolymph, which flows through the endolymphatic duct passing in the aqueduct of the vestibule into the endolymphatic sac, which lies in the thickness of the dura mater on the posterior surface of the pyramid. From the perilymphatic space, perilymph flows through the perilymphatic duct passing in the cochlear canaliculus into the subarachnoid space on the lower surface of the pyramid of the temporal bone.


