What are our human memories made of?

Division of labor in the brain: memories are everywhere

For the brain, recalling a memory is a kind of time travel: even before the thought of a particular event reaches consciousness, the same brain areas become active as at the time the remembered event took place. The brain follows a sophisticated strategy here, first calling up memories of general information from the context of the event, which in turn call up memories of more and more details until the entire activity pattern is restored, although the brain can certainly make mistakes here or fill in gaps.

Memories are therefore always held by networks of many nerve cells. This points to another functional principle of memory: division of labor.

Take the memory of a pencil as an example. Information about the pencil's color, shape and function is stored in different places in the brain, apparently assigned to the brain regions that are also responsible for perceiving the corresponding property. The color of the pencil is processed in a different location than, for example, its cylindrical shape. Memory works like an orchestra: the violins are responsible for the pencil's color, the flutes for its shape, the timpani for its function. Together, in a split second, the image of the pencil emerges in the mind's eye.

The question is how the brain knows that the different pieces of information belong to the same object, for there is no conductor in the brain who keeps everything under control with a baton. The researchers' suspicion is that the decisive factor is the rate at which the nerve cells fire. For example, all the nerve cells occupied with remembering the pencil might fire fifty times a second, while other cells, remembering a sheet of paper, discharge only thirty times. This would make it possible to combine widely distributed information details into an overall picture.
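The idea of labeling object features by a shared firing rate can be sketched in a few lines of Python. Everything here is illustrative: the feature names, rates and tolerance are invented for the example, not taken from any study.

```python
def group_by_rate(neuron_rates, tolerance_hz=5.0):
    """Group neurons whose firing rates lie close together; each group
    is taken to represent one bound object (toy version of rate-labelled
    binding)."""
    groups = []
    for name, rate in sorted(neuron_rates.items(), key=lambda kv: kv[1]):
        for g in groups:
            if abs(g["rate"] - rate) <= tolerance_hz:
                g["members"].append(name)
                # keep the group's rate as the running mean of its members
                g["rate"] = sum(neuron_rates[m] for m in g["members"]) / len(g["members"])
                break
        else:
            groups.append({"rate": rate, "members": [name]})
    return [sorted(g["members"]) for g in groups]

# Hypothetical feature neurons: three firing near 50 Hz (the pencil),
# two near 30 Hz (the sheet of paper).
rates = {"colour": 49.0, "shape": 51.0, "function": 50.0,
         "texture": 30.5, "size": 29.5}

print(group_by_rate(rates))
# -> [['size', 'texture'], ['colour', 'function', 'shape']]
```

The grouping step stands in for what synchronized firing is thought to achieve: features that "tick" at the same rate end up bound to the same object.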

Old and young memories

Christine Smith and Larry Squire (University of California) asked test subjects 160 questions about news events from the past 30 years and measured their brain activity. For questions about recent events, the hippocampus, which brings together and processes sensory impressions, was particularly active, but the further back the events lay, the less they activated the hippocampus. For events more than twelve years in the past, this brain region no longer played a role; instead, areas in the cerebral cortex came into play and became more active the older the memories were. How detailed the subjects' recollections were, or whether they associated personal memories with an event, was completely irrelevant. These regions of the cerebral cortex are therefore believed to be the ultimate storage site for long-term memories.

Memories are thus mainly stored in the cerebral cortex, while the hippocampus and the immediately adjacent entorhinal cortex are involved in memory formation, because this is where information converges and is processed. Maass et al. (2014) were able to assign memory formation to specific neuronal layers within the hippocampus and the entorhinal cortex. Observing which neuronal layer was active in each case allowed conclusions about whether information was flowing into the hippocampus or from the hippocampus into the cerebral cortex. This directional information made it possible to show where the entrance to memory is located in the brain. For the study, the brains of volunteers taking part in a memory test were examined; with the help of ultra-high-field magnetic resonance tomography at 7 Tesla, the activity of individual brain regions could be recorded with unprecedented accuracy.

The body's immune system, too, has memory cells that remember contact with a pathogen and ensure that the immune system can quickly and efficiently mount a defense reaction upon renewed contact with the same pathogen. Presumably, this immunological memory is also what gives vaccinations a protective effect lasting for years. According to studies by Basel immunologists, certain components of the immune system begin to differentiate into two types during an infection: some take part in the actual defense reaction as short-lived effector cells, while long-lived memory cells arise that store the specific immune reaction. The T-cell receptor, located on the surface of T-cells, plays an important role in this process.

Recent research shows that the immune system is able to learn during sleep and to develop a memory for pathogens; evidently the immune system uses sleep precisely to shape this immune memory. When the immune system is confronted with a pathogen, i.e., an initial infection has occurred, macrophages and other immune cells rush to the pathogen, engulf it and present fragments of their meal to the lymphocytes. The lymphocytes then divide and, in the course of these divisions, form cells that develop specific antibodies against the pathogen, including memory cells that remember it. When the pathogen re-enters the body, these memory cells react very quickly and can stop it from spreading at an early stage. After an initial infection, the immune system thus stores knowledge about the pathogen so that it can fight it faster and more efficiently upon reinfection. A number of indications suggest that deep sleep is fundamentally important for this type of memory formation in the immune system. This was confirmed in an experiment in which test subjects who slept after a vaccination, and who spent a lot of time in deep sleep, still had considerably more antibodies in their blood a year later than members of a control group who stayed awake the night after the vaccination.

Literature & sources
Interview with Jan Born, sleep researcher at the University of Tübingen.
WWW: http://www.swp.de/ulm/nachrichten/wissen/Mensch/Lern-im-Schlaf;art1185449,1975802 (13-05-01)

Science 2009

Maass, A., Schütze, H., Speck, O., Yonelinas, A., Tempelmann, C., Heinze, H.-J., Berron, D., Cardenas-Blanco, A., Brodersen, K. H., Stephan, K. E. & Düzel, E. (2014). Laminar activity in the hippocampus and entorhinal cortex related to novelty and episodic encoding. Nature Communications, doi:10.1038/ncomms6547.

One of the central puzzles of neuroscience is how the brain manages to assemble the bundles of information flowing in through the various sensory organs into an experienced whole, generating holistic, scene-like perceptions and ideas. To ensure orientation in the environment, the brain must extract (sufficiently) reliable information from the world and convert it into behavior. As a rule, five properties of environmental stimuli are important to the brain:

  • the modality of a stimulus (sight, taste, hearing, etc.),
  • its quality (in vision, e.g., color or brightness),
  • its intensity (e.g., volume),
  • the temporal structure of the stimulus, and
  • its location.

The coding of intensity is brought about by the discharge pattern of individual nerve cells (depolarizations): the more intense the stimulus, the higher the discharge frequency of the nerve cells concerned. For the encoding of sensory modality and quality, the "principle of the place of processing" applies: the place where a given excitation is processed determines its subjective perception. The brain interprets as "seeing" whatever excites those areas of the cerebral cortex responsible for visual information processing; accordingly, whatever excites the auditory cortical areas is interpreted as "hearing", and so on. This was postulated as early as the 19th century by Hermann von Helmholtz. See also Donald O. Hebb's hypothesis on the "binding problem" in neuroscience: how does the brain "know" that certain represented properties belong to one and the same object, i.e., how are shape, color, smell, etc., which are processed in different brain areas, merged? The most recent hypothesis is that neurons are bound simply by being active at the same time; in other words, their excitation patterns run synchronously, something that has already been demonstrated in the brains of cats and monkeys.

This integration of individual nerve impulses into coherent wholes, binding, is an umbrella term for the bundling of sensory data from individual receptive fields, but also for the integration of different sensory modalities into unified perceptual impressions. A number of features are important for visual perception (color, shape, surface structure, distance, spatial orientation and direction of movement), and there are corresponding receptive fields for each of these features: around 30 different receptive field types for vision have been found in monkeys and 20 in cats. A decisive step for recognition is determining which features belong to an object, to a figure. The recorded features must then be bound to an object, because unbound features cannot enter working memory. This process of segmentation into related object areas is, for example, the prerequisite for figure-ground differentiation (a well-known example is the picture of the Dalmatian sniffing the ground against a background of the same color).

Incidentally, the brain uses numerous tricks in complex activities to process optical information quickly and with little effort. Certain neurons in the cerebrum, for instance, react specifically to edges so that the outlines of objects can be identified quickly. This even works when objects partially occlude one another, because there is a large number of specialized cells, including some that respond to rounded features. Bornschein et al. (2013) found that the behavior of such brain cells can be described very well in neural models if the overlap between objects is taken into account.

The claim, attributed by some to brain researchers, that the average person uses only about 10 percent of the brain's total capacity is nonsense; no serious neuroscientist makes it in this form. The view that only ten percent of brain cells are really needed while the rest supposedly slumber as a hidden reserve is one of those myths from the world of science that simply refuse to die out. How this misjudgment came about cannot be proven, but it was probably psychologists at the beginning of the 20th century who spread the theory that only a small percentage of our mental potential is used, from which incorrect conclusions about brain activity were drawn. There have never been any scientific publications on the subject, although modern imaging methods may seem to convey this impression, because recordings from functional magnetic resonance tomography show only those small areas of the brain that are especially active during an observed activity. This interpretation is wrong, however: in fact the entire brain is always active, and the areas shown are merely particularly active at the time of measurement. There is, of course, no shortage of motivational speakers, self-help authors and psycho-coaches into whose hands this myth plays.
Incidentally, Douglas Adams provided a fitting gloss on this with his Electric Monk: "She had once heard that humans were supposed to use only about one-tenth of their brains, and that no one knew exactly what the other nine-tenths were for, but she had certainly never heard it suggested that they were used for keeping penguins." ;-)

 

The seven criteria by which such gestalt formation takes place in the brain were already discovered in the 1920s and 1930s by Gestalt psychology (especially by Köhler): continuity, proximity, similarity, "common fate", closure, "good continuation" and symmetry. However, segmentation also depends on the observer's attention and prior knowledge of the situation.

Wolf Singer and Engel (Max Planck Institute for Brain Research) have long been trying to find physiological correlates of gestalt formation, which for a long time proved difficult, since the earlier hypothesis of binding neurons (features converging on a single neuron) was limited to the representation of elementary features. Although individual neurons were found in the visual cortex for light/dark differences, color, direction of movement and the relative distance of objects, no "grandmother neuron", a binding neuron for a complex object such as a whole grandmother, was ever found. How uneconomical such a mechanism would be becomes clear when one considers how many different objects would each require their own binding neuron. It was therefore obvious that a second mechanism had to exist for complex features. The idea of how this mechanism could work comes from Milner (1974) and the neuro-computer scientist Christoph von der Malsburg (1981): by synchronizing their impulses, neurons can form temporally coded assemblies (in the sense of Hebb 1949), which then represent an object. The temporal correlation should have an accuracy of a few thousandths of a second. A coherent object should be detectable by its neurons firing in exactly the same rhythm; conversely, neurons that do not fire in the same cycle should represent different objects, thus enabling segmentation and figure-ground differentiation. This hypothesis has essentially been confirmed: there are temporal synchronizations not only between receptive fields of the same area, but also between centers for visual perception in the occipital visual cortex and between the left and right hemisphere of the visual cortex. Such correlations have now been confirmed in and between many cortical and subcortical areas. It was also shown that these temporal correlations can be influenced by changed stimuli and that strong synchronization occurs only when the neurons respond to the same object.
In the case of different objects, the temporal coupling becomes weaker or disappears completely (after Held, n.d.).
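The Milner/von der Malsburg criterion, spikes of one object coinciding within a few milliseconds, can be sketched with a crude coincidence measure. The spike times and the coincidence window below are invented for illustration.

```python
def synchrony(train_a, train_b, window_ms=3.0):
    """Fraction of spikes in train_a that have a partner spike in
    train_b within window_ms: a crude coincidence measure."""
    hits = sum(1 for t in train_a
               if any(abs(t - u) <= window_ms for u in train_b))
    return hits / len(train_a)

# Two neurons locked to the same 40 Hz rhythm (same object) ...
a = [0.0, 25.0, 50.0, 75.0]
b = [1.0, 24.0, 51.0, 76.0]
# ... and one firing at an unrelated rhythm (different object).
c = [10.0, 41.0, 63.0, 88.0]

print(synchrony(a, b))  # -> 1.0 (same cycle: bound into one object)
print(synchrony(a, c))  # -> 0.0 (different cycle: segmentation)
```

A high score stands for assembly membership, a low score for figure-ground separation; real analyses use cross-correlograms rather than this all-or-nothing count.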

The brain saves energy by building modules

In a study of musicians and non-musicians, it was found that the human brain saves computing power by storing difficult movements in such a way that they can be called up quickly and fairly effortlessly when needed. The researchers observed the finger movements of violinists and pianists, recorded how they performed certain fingerings on their instruments after years of training, and examined the resulting regularities of the movement patterns. Then, by means of transcranial magnetic stimulation (strong magnetic fields), the finger movements were triggered directly by stimulating the cerebral cortex. The finger movements triggered in a completely relaxed state had characteristics directly connected to the movements trained over many years. Evidently the corresponding abilities are stored in the brain modularly, as on a kind of memory chip, making it possible for the musicians to perform the special movements with greater ease, precision and smoothness. Such memory modules of the brain change only gradually and very slowly, through long training on particularly difficult tasks, and musicians are known to practice for hours every day for many years.
This working principle of the brain presumably applies not only to the movements of musicians but also to other people and other motor skills, for example typing or assembly-line work. By storing recurring processes in building blocks, the brain saves valuable time and energy and avoids having to start the analysis of a situation from scratch every time. Perceptual processes such as visual perception are presumably also simplified by such storage processes in the brain.
Source: http://www.zv.uni-leipzig.de/service/presse/pressemmeldung.html?ifab_modus=detail&ifab_id=3915 (10-10-27)

The digital language of the brain

As is well known, the universal language of the brain consists of electrical impulses, the spikes: at any point in time, every single one of the millions of nerve cells in the human brain can either emit a spike or remain silent. The brain thus represents information about the world much as a computer does, in a binary code: zero or one, spike or no spike. Neuroscientists can measure the activity of dozens of neurons at the same time, but the properties of the binary patterns arising from the spike activity of the nerve cells have not yet been clarified. For a long time, attempts have been made to model the brain's signal patterns using statistical methods in order to understand how sensory perceptions are coded at the neural level. Among other things, the Ising model from physics is used, which describes how a large number of ferromagnetic particles develop a collective behavior that ultimately determines the magnetism of a material. Studies have shown that the Ising model can in some cases provide surprisingly precise descriptions of the activity in a neural population, but in other cases the model fails in characteristic ways. A research group led by Jakob Macke (Macke et al., 2011) showed in experiments that there are apparently common input signals arriving at all neurons that may not have been observed directly in the experiment; that is, the neurons of the visual system receive important signals from other neurons outside the system under observation, which undermines the validity of the Ising model.
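The binary code and the pairwise ("Ising-like") description can be made concrete with a toy example. The spike patterns below are invented, and a real maximum-entropy fit is far more involved than this covariance calculation; the sketch only shows which second-order quantities such a model is built from.

```python
import itertools

# Population activity as binary words, one bit per neuron per time bin
# (1 = spike, 0 = silent); these eight patterns are invented toy data.
patterns = [
    (1, 1, 0), (1, 1, 0), (0, 0, 1), (1, 1, 1),
    (1, 0, 0), (1, 1, 0), (0, 1, 0), (1, 1, 0),
]

def pairwise_cov(patterns, i, j):
    """Covariance of neurons i and j across time bins; a pairwise
    ('Ising-like') model tries to explain the population statistics
    from exactly these second-order terms plus the mean rates."""
    n = len(patterns)
    mi = sum(p[i] for p in patterns) / n
    mj = sum(p[j] for p in patterns) / n
    return sum((p[i] - mi) * (p[j] - mj) for p in patterns) / n

for i, j in itertools.combinations(range(3), 2):
    print(i, j, pairwise_cov(patterns, i, j))
```

Unobserved common input, the effect Macke and colleagues identified, would show up here as extra correlation that the pairwise terms alone cannot account for.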

Correlation of neural activity through firing rate and timing

In the brain, the activity of billions of neurons is "correlated", because only in this way can the brain accomplish feats such as listening to music or reading a text. Each neuron in the cortex receives information from around 30,000 other neurons and sends out individual neural impulses in response. The various electrical input signals a neuron receives cause fluctuations in the voltage across its membrane, and as soon as the membrane voltage reaches a certain threshold, the neuron itself emits a signal. So far, two theories have been developed about how the brain encodes information in the electrical activity of neural signals: on the one hand via the firing rate, on the other via the exact timing of a neural impulse relative to other signals.
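The threshold mechanism just described can be sketched as a toy "leaky" threshold unit that converts a stream of analog inputs into a digital spike train. The leak factor, threshold and input values are arbitrary illustration choices, not the model actually used in the studies below.

```python
def threshold_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky threshold unit: analog inputs accumulate on the 'membrane';
    whenever the voltage crosses threshold, a digital spike (1) is
    emitted and the voltage is reset."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x          # leaky integration of the input signal
        if v >= threshold:
            spikes.append(1)
            v = 0.0               # reset after the spike
        else:
            spikes.append(0)
    return spikes

print(threshold_neuron([0.4, 0.1, 0.5, 0.2, 0.6, 0.1, 0.4, 0.3]))
# -> [0, 0, 0, 1, 0, 0, 0, 1]
```

The output illustrates both coding schemes at once: how many ones appear is the firing rate, and where they appear is the spike timing.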

Fred Wolf et al. (Bernstein Center for Computational Neuroscience and Max Planck Institute for Dynamics and Self-Organization) were able to show in studies how and under which conditions this correlation comes about: the neural conversion of input signals into output signals follows a relatively simple mathematical formula, converting voltages into digital signals much as a microprocessor does. The correlation between the response signals of two neurons depends not only on how similar they are, but also on how active the cells are. If the neurons send many signals in quick succession (a high firing rate), the response signals are also more strongly correlated. However, this only holds if the neurons share just a fraction of their input signals; the rules change drastically when the neurons are largely driven by common input and accordingly produce similar responses. In that case, the firing rate does not matter. Different neurons in the visual cortex specialize in particular aspects of image processing: they react to color, brightness, orientation or direction of movement. There are many indications that cells encoding the same object synchronize their signals so that related information is passed on together.

These statements from their mathematical model could be directly confirmed experimentally by stimulating cells with brain waves simulated in the computer and measuring their respective response signals. Obviously, the two concepts of neural coding mentioned are closely related.

Resonance connects distant areas of the brain

How nerve cells in the brain communicate with one another over long distances is a complex question, because given how networks of nerve cells are interconnected and how individual cells react to impulses, transmission over greater distances ought actually to be impossible, mainly because strong connections between far-distant brain areas have not been found.

According to recent research with computer simulations (Hahn et al., 2014), confirmation has now been found for a long-suspected global mechanism in the brain that sets brain areas into coupled oscillations. It was discovered that resonance could be the key to long-distance communication in networks that, like the brain, have relatively few and weak connections. Not all nerve cells excite others; some have an inhibitory effect. The interplay of excitation and inhibition can make the activity in a network oscillate around a certain value, and networks usually have a frequency at which these oscillations are particularly strong, just as a taut violin string has a preferred frequency. If the activity oscillates at this frequency, pulses spread much further. It is now believed that in certain cases resonance amplification by oscillating signals may be the only way to communicate over long distances, and that, thanks to a network's ability to change its preferred frequency, the brain can process information in different ways at different times.
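The violin-string analogy can be made concrete with the textbook formula for the steady-state amplitude of a driven damped oscillator: the response is largest near the system's preferred frequency. The 40 Hz natural frequency and the damping value are illustrative assumptions, not parameters from Hahn et al. (2014).

```python
import math

def response_amplitude(drive_freq_hz, natural_freq_hz=40.0, damping=5.0):
    """Steady-state amplitude of a damped oscillator driven at a given
    frequency; the response peaks near the system's preferred frequency."""
    w = 2 * math.pi * drive_freq_hz
    w0 = 2 * math.pi * natural_freq_hz
    return 1.0 / math.sqrt((w0**2 - w**2)**2 + (damping * w)**2)

# Drive the same "network" at several frequencies: signals near the
# preferred 40 Hz are amplified far more strongly than the others.
amps = {f: response_amplitude(f) for f in (10, 20, 40, 60, 80)}
best = max(amps, key=amps.get)   # the resonance frequency
```

In the resonance picture, only activity arriving at `best`, the network's preferred frequency, is amplified enough to bridge weak long-range connections.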

Does the critical state of the neural networks decide about selective perception?

Rustling leaves, light rain on the window, a quietly ticking clock: faint sounds just above the hearing threshold are perceived in one moment and not in the next, even if neither you nor the sounds have changed. Studies have shown that an incoming stimulus, such as a sound, an image or a touch, is processed differently each time, even when the stimulus is exactly the same. This is because the extent to which a stimulus activates the relevant brain regions depends on the current state of the networks to which these regions belong. It was unclear, however, what influences this constantly fluctuating state of the networks and whether it changes randomly or follows a rhythm. Stephani et al. (2020) have now found out how this processing works, with a critical state playing a decisive role. These relationships were investigated using thousands of small, consecutive electrical currents applied to the participants' forearms in order to stimulate the main nerve of the arm. Each stimulation led to an initial reaction in the somatosensory cortex 20 milliseconds later, and the EEG pattern shows how easily each individual stimulus excites the brain. The brain reacts more strongly to a stimulus the more excitable the networks are at the moment the stimulus information enters the cortex. Depending on their state, the nerve cells in the primary somatosensory cortex are easier or harder to excite, and this excitability determines how the stimulus is processed further; in other words, it already influences how the brain deals with a stimulus at the entrance to the cerebral cortex, not only at higher, downstream levels.

There is always a certain amount of activity among the neurons of a network, even when no external influences appear to act on it; in other words, the system is never completely inactive. Rather, the neurons constantly receive information, for example from inside the body, as they monitor heartbeat, digestion and breathing, the body's position in space, and internally generated thoughts. Neurons are active even when isolated from any input, so these internal processes constantly influence the excitability, or readiness, of various brain networks, and their dynamics ultimately determine the excitability of the system and thus its reaction to a stimulus. How excitable the cortex is, however, is not left to chance: the alternation between lesser and greater excitability follows a certain temporal pattern, in which the current state depends on the previous one and in turn influences the next. This is referred to as a long-range temporal dependency, or long-lasting autocorrelation. The fact that the cortex varies in its excitability suggests that its networks are close to a so-called critical state, i.e., that they constantly fluctuate in a delicate balance between excitation and inhibition. This critical state may be decisive for brain function, because it allows as much information as possible to be transmitted and processed, so this balance could also determine how the brain processes sensory input. It is believed to serve as an adaptation mechanism for coping with the variety of information constantly arriving from the environment: a single stimulus should neither excite the entire system at once nor fade away too quickly.

However, it is still unclear what this means for subjective perception, because other processes probably also play a role here, such as attention. If attention is directed at something else, the incoming, less-attended stimulus can still cause a strong initial brain response, but higher downstream processes in the cerebrum could then prevent it from being consciously perceived.
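The long-lasting autocorrelation mentioned above, each state depending on the previous one, can be illustrated by comparing a slowly drifting "excitability" trace with an uncorrelated one. Both series are invented toy data, not measurements.

```python
def autocorr(series, lag):
    """Sample autocorrelation of a series at a given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    cov = sum((series[t] - mean) * (series[t + lag] - mean)
              for t in range(n - lag)) / (n - lag)
    return cov / var

# A slowly drifting "excitability" trace: each state builds on the last.
drifting = [0.0]
for step in [0.1, 0.2, 0.1, -0.1, 0.1, 0.2, -0.1, 0.1, 0.1, -0.2]:
    drifting.append(drifting[-1] + step)

# An uncorrelated, jittery trace of the same length for comparison.
jittery = [0.0, 0.5, -0.4, 0.6, -0.5, 0.4, -0.6, 0.5, -0.4, 0.6, -0.5]
```

The drifting trace keeps a positive autocorrelation even at longer lags, which is the signature Stephani and colleagues looked for in cortical excitability, whereas the jittery trace does not.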

Division of tasks between the hemispheres of the brain

Floegel et al. (2020) examined the division of tasks between the two halves of the brain in speaking, with subjects having to speak while their brain activity was recorded using functional magnetic resonance imaging. It is well known that when people speak, they need both halves of the brain, each of which takes on part of the complex task of forming sounds, modulating the voice and monitoring what is said. It turned out that not only the right hemisphere analyzes how we speak; the left hemisphere also contributes.

Until now it was assumed that the spoken word originates in the left hemisphere and is analyzed by the right, which would mean, for example when learning English and practicing the "th", that the left hemisphere controls the interplay of tongue and teeth while the right checks whether the sound produced really sounds as intended. This study has now shown that during speech control the left hemisphere governs temporal aspects such as the transitions between speech sounds, while the right hemisphere is responsible for the sound spectrum. When you say "mother", for example, the left hemisphere preferentially controls the dynamic transitions between the "th" and the vowels, while the right hemisphere preferentially checks the sound of the speech sounds themselves.

A possible explanation for this division of labor between the two hemispheres is that the left hemisphere generally analyzes fast processes, such as the transitions between speech sounds, better than the right, while the right hemisphere is better at controlling the slower processes needed to analyze the sound spectrum. That this is indeed the case is evident from an earlier study on hand motor skills by Pflug et al. (2019), which aimed to clarify why people prefer the right hand for fast processes and the left for slow ones, as when cutting bread: the right hand saws with the knife while the left holds the bread. In that experiment, right-handed subjects tapped with both hands to the rhythm of a metronome, in one variant tapping on every beat and in the other only on every fourth. The right hand proved more precise in the fast tapping sequence, and the left hemisphere, which controls the right side of the body, showed increased activity; conversely, the left hand kept better time with the slow rhythm, and the right hemisphere was more active.

Overall, it was found that complex behavior such as hand movements and speaking is controlled by both hemispheres of the brain, with the left hemisphere preferentially controlling the fast processes while the right hemisphere controls the slow processes in parallel.

 

Literature

Bornschein, J., Henniges, M. & Lücke, J. (2013). Are V1 simple cells optimized for visual occlusions? A comparative study. PLoS Computational Biology, 9(6), e1003062. doi:10.1371/journal.pcbi.1003062.

Floegel, M., Fuchs, S. & Kell, C. A. (2020). Differential contributions of the two cerebral hemispheres to temporal and spectral speech feedback control. Nature Communications, doi:10.1038/s41467-020-16743-2.
Hahn, G., Bujan, A. F., Frégnac, Y., Aertsen, A. & Kumar, A. (2014). Communication through resonance in spiking neuronal networks. PLoS Computational Biology, doi:10.1371/journal.pcbi.1003811.

Macke, J., Opper, M. & Bethge, M. (2011). Common Input Explains Higher-Order Correlations and Entropy in a Simple Model of Neural Population Activity. Physical Review Letters, 106, 208102. doi:10.1103/PhysRevLett.106.208102.

Without author (2010). Nerve cells take care of their neighbors. Max Planck Institute for Dynamics and Self-Organization.
WWW: http://www.mpg.de/bilderBerichteDokumente/dokumentation/pressemitteilungen/2010/pressemitteilung201002011/index.html (10-02-07)
Pflug, A., Gompf, F., Muthuraman, M., Groppa, S. & Kell, C. A. (2019). Differential contributions of the two human cerebral hemispheres to action timing. eLife, doi:10.7554/eLife.48404.

Stephani, T., Waterstraat, G., Haufe, S., Curio, G., Villringer, A. & Nikulin, V. V. (2020). Temporal signatures of criticality in human cortical excitability as probed by early somatosensory responses. Journal of Neuroscience, doi:10.1523/JNEUROSCI.0241-20.2020.

Tchumatchenko, T., Malyshev, A., Geisel, T., Volgushev, M. & Wolf, F. (2010). Correlations and Synchrony in Threshold Neuron Models. Physical Review Letters, 104(5).



