Since the 1990s, the personal computer has served as a window into the digital world, first for tens and hundreds of thousands, and then for millions of people; from the first decade of this century onward, the smartphone began to displace it from that position. But both the PC and the mobile phone, no matter how many megapixels their display matrices contain or how deep the bass reproduced by their built-in speakers, remain just that – windows: the marvelous digital world they show is strictly bounded by the screen's frame, beyond which the user still feels the unblinking, leaden gaze of reality. Perhaps it is the desire to escape the pressure of that gaze, at least for a while, that drives researchers' persistent work on all sorts of versions of VR, AR, XR and MR – despite the frankly restrained public reaction to the headsets offered so far, in all their variants of virtual and otherwise modified (augmented, extended, mixed) reality. "Restrained," in fact, is putting it mildly. A survey conducted by Security.org in mid-2024 showed that almost half (49%) of Americans surveyed had not bothered to purchase a VR headset simply because they saw no point in it (incidentally, only 10% admitted to dizziness or other unpleasant sensations when trying out this type of interface for immersion in digital worlds).
And indeed: although VR/AR glasses and helmets have already learned to deceive quite convincingly the two senses we rely on most – vision and hearing – in the other three areas, namely touch, smell and taste, computer interfaces still clearly lag behind. This cannot help but affect the perception of a computer-generated artificial world (a complete one in the case of VR, a fragmentary one for AR/XR/MR): a virtual keyboard can be seen, and you can even hear the click of its keys when pressed, if the developers of the virtual environment took care of that. But your fingertips will not feel the familiar sensations when typing on it, it will not give off the characteristic smell of plastic, and even if inventive programmers, in their quest for realism, make machine-generated crumbs fall out of the virtual keyboard when it is turned upside down, you will not be able to taste them. And the human brain is structured rather paradoxically: it accepts imitations that are obviously far from reality with favor, but the narrower that gap becomes, the stronger the distrust – and even horror – of what is observed (see the well-known "uncanny valley" effect).
So does it turn out that full immersion in virtuality is impossible without interfaces that connect us to the digital world through the tactile, olfactory and gustatory channels? Quite likely – and if someday Elon Musk's Neuralink, or another neural interface with a similar operating principle, learns to act directly on the corresponding areas of the brain, this problem will be solved. So far, however, progress in this direction is moving much more slowly than those eager to dive into virtuality would like: in particular, the nanowires that provide physical contact between the interface and the brain become covered with glial cells over time, which degrades their performance – and no effective countermeasure has yet been proposed. This means that for the foreseeable future the agenda will be dominated by the good old non-invasive means of human-machine communication, which deceive not the brain itself but the peripheral nervous system in the corresponding sense organs. Surely that should be easier?
⇡#Metaverses taste different
At the end of November 2024, the reputable American academic journal Proceedings of the National Academy of Sciences published an article by a group of researchers from the City University of Hong Kong presenting portable taste interfaces for VR/AR/MR, which are to be made in the form of the familiar lollipop. Firstly, that shape is convenient (and accidentally swallowing such a gadget, let alone choking on it, would not be easy); secondly, this team has clearly taken a systems approach – its earlier areas of interest were mobile olfactory and tactile interfaces, which also target widespread use. The device, which at the laboratory stage does not yet have a name of its own, contains in its tip (which the user is expected to lick in order to taste a particular virtual object) mini-containers with agarose gel – a particularly pure fraction of agar, a natural linear polysaccharide obtained from red seaweed. Such a gel can hold molecules of various food flavoring additives – initially the researchers experimented with nine, including the tastes of sugar, salt, citric acid, passion fruit, green tea, milk, durian (taste only, without the smell, of course) and grapefruit. Under the influence of an electric current passed through a container, the gel releases the bound additive molecules, which migrate to the surface and, mixing with the user's saliva, create a taste sensation – the stronger the activating current, the more pronounced it is.
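For illustration, here is a minimal sketch of how such a multi-container gel interface might be driven in software, assuming (as described above) that perceived intensity grows with the activation current of each container. The channel names, the current ceiling and the linear intensity-to-current mapping are assumptions made for the example, not the published device's actual API.

```python
# Hypothetical driver sketch for a multi-container gel taste interface.
# Assumption: perceived flavor intensity scales with the activation current
# passed through the corresponding agarose-gel container.

CHANNELS = {"sugar": 0, "salt": 1, "citric_acid": 2, "grapefruit": 3}
MAX_CURRENT_MA = 2.0  # assumed safe per-container ceiling, in milliamps

def taste_mix_to_currents(mix: dict[str, float]) -> dict[int, float]:
    """Map desired intensities (0..1) to activation currents per channel."""
    currents = {}
    for flavor, intensity in mix.items():
        if flavor not in CHANNELS:
            raise ValueError(f"no container loaded with {flavor!r}")
        clamped = max(0.0, min(1.0, intensity))
        currents[CHANNELS[flavor]] = clamped * MAX_CURRENT_MA
    return currents

# A virtual lemonade: mostly sweet with a sour edge.
print(taste_mix_to_currents({"sugar": 0.7, "citric_acid": 0.3}))
```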
Strictly speaking, the chemical method of computer taste emulation – delivering flavoring additives directly to the user's taste buds – is not the only one possible. Back in 2018, the startup Multisensory Interactive Media Lab offered electronic chopsticks – a fairly simple device in which conductive sticks end in a pair of electrodes that stimulate those same buds through the saliva with weak currents, creating the illusion of, say, a salty taste in food to which no salt was added during cooking. Judging by the fact that this extravagant startup has not survived to the present day, the idea did not pay off – especially since its founder, Nimesha Ranasinghe, admitted early on that he was struggling to reproduce perhaps the most popular taste on the planet: sweet. His electronic chopsticks could create sensations of sour, salty and even bitter by using different combinations of amplitude and frequency of alternating current, as well as different electrode tip materials. It is worth noting that the widespread popular notion of the selective sensitivity of different zones of the tongue to different tastes is fundamentally wrong: scientists still cannot say with complete certainty exactly how the extremely complex taste buds work. And direct electrical stimulation of the papillae is apparently not enough to convey adequately the entire palette of tastes a person perceives.
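As a purely illustrative aside, the "different waveform per taste" idea can be captured in a small lookup table; the numeric amplitude and frequency values below are invented placeholders, not Ranasinghe's published settings.

```python
# Illustrative only: pairing basic tastes with hypothetical stimulation
# parameters, echoing the idea that different AC amplitude/frequency
# combinations evoke different sensations. The values are placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class Stimulus:
    amplitude_ua: float   # microamps
    frequency_hz: float

TASTE_WAVEFORMS = {
    "sour":   Stimulus(amplitude_ua=120.0, frequency_hz=600.0),
    "salty":  Stimulus(amplitude_ua=60.0,  frequency_hz=200.0),
    "bitter": Stimulus(amplitude_ua=90.0,  frequency_hz=1000.0),
    # "sweet" is conspicuously absent: the startup never managed it reliably.
}

print(TASTE_WAVEFORMS["salty"])
```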
The Hong Kong developers of the digital lollipop therefore focused on the chemical means of shaping the user's taste sensations – and, judging by the published results, achieved their goal. Their interface is clearly not burdensome: the 3D-printed body is 8 cm long and weighs 15 g fully loaded. Versions with fewer than nine simultaneously reproducible flavors can allot more flavoring additive to each container – which, in turn, increases either the maximum achievable intensity of a given flavor or how long a refill lasts. Tests showed that in the operating scenario chosen for the experiment (obviously more intensive than the practical use of such an interface would imply, even in specialized VR games), one such block lasts about an hour. According to the developers, artificial taste pleasure should not be too expensive, since all its chemical components are already produced in huge volumes, so the cost of mass manufacture ought to be low. Another question is whether such production will ever be set up – in other words, whether there is (or will be in the foreseeable future) any significant demand for computer taste interfaces.
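The capacity trade-off can be put into numbers with a back-of-the-envelope sketch. The only figure taken from the text is that the nine-flavor configuration lasted roughly an hour in the test scenario; the reservoir volume and consumption rate below are invented round numbers chosen to reproduce that, and only the proportional reasoning matters.

```python
# Back-of-the-envelope model of the "fewer flavors -> more gel per flavor ->
# longer runtime (or higher intensity)" trade-off. The absolute numbers are
# assumptions picked so that nine flavors give roughly the ~1 hour quoted above.

TOTAL_GEL_UL = 900.0           # assumed total gel volume across all containers
CONSUMPTION_UL_PER_MIN = 1.7   # assumed draw per container at test intensity

def runtime_minutes(n_flavors: int) -> float:
    per_container = TOTAL_GEL_UL / n_flavors
    return per_container / CONSUMPTION_UL_PER_MIN

for n in (9, 4, 2):
    print(f"{n} flavors -> ~{runtime_minutes(n):.0f} min per refill")
```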
⇡#Hark! A whiff of digital violets…
Chemical receptors – both taste and smell – are among the most ancient in the arsenal of living organisms, and for that reason extremely complex: the number of receptor cells on the mucous membrane of the human nose reaches 50 million, on the tongue 400-500 thousand. Each such cell reacts to contact with particular molecules by generating an excitation impulse; these nerve impulses then arrive at special neurons, each of which is connected to a whole host of receptors of its own. As a result, any natural aroma (formed, as a rule, not by one particular molecule but by a whole combination of them – in contrast to the "chemically pure" smell of an artificially created mono-substance) triggers its own unique combination of impulses in the neural network connected to the receptor cells – and the brain associates that pattern of neuron activation with that particular aroma. At the descriptive level this scheme is not so complicated, so the idea of an "electronic nose" for machine discrimination of particularly important odors (methane in a mine, hazardous substances in luggage, and so on) has long been in the air – and has already taken shape in the artificial aroma detectors used in practice. But it is one thing to identify natural odors with a digital system, and quite another to generate aromas for a person immersed in virtuality to perceive.
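The combinatorial coding described above can be sketched in a few lines: each molecule excites its own subset of receptor types, a natural aroma is a mixture of molecules, and it is the resulting activation pattern, not any single receptor, that identifies the smell. The receptor assignments below are invented for illustration; only the odorant names are real substances.

```python
# Toy model of combinatorial odor coding: an aroma is identified by the set
# of receptor types its constituent molecules excite. The receptor
# assignments here are made up; the scheme, not the data, is the point.

RECEPTORS_BY_MOLECULE = {
    "cis-3-hexenol": {"OR-A", "OR-C"},          # "cut grass" note
    "vanillin":      {"OR-B"},
    "limonene":      {"OR-A", "OR-D", "OR-E"},  # citrus note
}

def activation_pattern(mixture: list[str]) -> frozenset[str]:
    """Union of receptor types excited by all molecules in the mixture."""
    pattern: set[str] = set()
    for molecule in mixture:
        pattern |= RECEPTORS_BY_MOLECULE.get(molecule, set())
    return frozenset(pattern)

KNOWN_AROMAS = {
    activation_pattern(["cis-3-hexenol", "limonene"]): "citrus over fresh grass",
}

# The same mixture in any order produces the same pattern -> the same aroma.
print(KNOWN_AROMAS.get(activation_pattern(["limonene", "cis-3-hexenol"]), "unknown"))
```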
In 2023, Nature Communications published a report by a group of Chinese researchers on a prototype miniature wireless olfactory interface aimed at use together with virtual reality headsets. In operating principle the device resembles the taste interface discussed above, but since odors spread through the air, direct contact with the user's nasal mucosa is not required – it is enough to deliver air saturated with fragrances close to the nasal cavity. Strictly speaking, on a grander scale (cinema halls, for example) analog odor-producing units called Smell-O-Vision and AromaRama were used in the USA back in the middle of the last century, and their direct predecessors operated in British theater halls as far back as the 1860s – distributing fragrances, however, not in step with the action unfolding on stage but solely to advertise the perfume company sponsoring the performance. Today's aroma generators are also essentially analog, since olfactory receptors cannot be influenced by digital means – they demand natural molecules of a strictly defined configuration – but at least such devices are controlled by appropriately programmed computer systems.
Developers around the world first showed broad interest in computer virtual reality that goes beyond purely audiovisual means in the mid-1990s: this period saw, for example, the Ferris Productions aroma-generating system used in American amusement parks, and its analogue from the British BOC Group, which worked in conjunction with HUD displays. Around the same time, operator-controlled scent interfaces began to be used for realistic firefighter training in the United States, immersing cadets (but not the instructors monitoring their actions) in the unmistakable atmosphere of burning wood, petroleum products and/or rubber. Toward the end of the 2000s the baton was picked up by the Japanese, who proposed an aroma generator with electromagnetic valves, the Olfactory display, specifically for personal video viewing and gaming (primarily culinary-oriented) – and supplied it with 32 "basic" aromas whose individual release or mixing made it possible to reproduce a fairly wide palette of smells. In 2018 the device was improved with an ultrasonic atomizer (a dispenser of liquid aromatic substance). The modified "aroma display" became more compact, but still proved too large to be integrated into a headset.
The same problem has plagued other similar projects, such as the Cilia system from the Texas startup HapticSol (offered since 2020 as a desktop gadget, or as a reduced-capability version worn around the neck like a pendant), for which there are even compatibility plugins for the Unity and Unreal Engine engines – although, of course, not all games are able to use them. More recently, in 2022, Arizona State University presented perhaps the most advanced development in terms of applicability to VR systems, under the plain name The Smell Engine: it takes the form of a small mask covering the nose (like a hospital oxygen mask) and is physically compatible with the commercial virtual reality headsets available today. Proper software integration of The Smell Engine into the digital world – through the Unity engine, in particular – allows smells to be generated with an intensity that depends on the distance to the digital object the user is supposed to be sniffing. True, a rather thick tube runs from the aroma generator itself to the nasal interface, which is quite acceptable for an experimental sample but obviously unsuitable for a commercial device.
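The distance-dependent intensity idea is easy to illustrate. The sketch below assumes a simple inverse-square falloff and hypothetical parameter names; it does not reproduce The Smell Engine's actual Unity integration.

```python
# Hedged sketch: commanded odor intensity as a function of the distance
# between the user's nose and the scented virtual object. The inverse-square
# law and the clamp are assumptions chosen for illustration.

def odor_intensity(source_strength: float, distance_m: float,
                   min_distance_m: float = 0.1) -> float:
    """Inverse-square falloff, clamped so intensity stays finite up close."""
    d = max(distance_m, min_distance_m)
    return source_strength / (d * d)

# A virtual coffee cup half a metre away versus right under the nose.
print(odor_intensity(1.0, 0.5))    # weaker
print(odor_intensity(1.0, 0.12))   # much stronger
```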
The Chinese development from 2023 mentioned a little earlier, reported in Nature Communications, may in the future prove a more promising basis for aromatic immersion in virtual worlds, since it is a set of freely combinable unitary olfactory generators (OGs), each responsible for producing a single aroma. One such generator is a thin plate of roughly 1.5 × 1.5 cm containing an aromatic liquid and a heating element; the heating power (and, accordingly, the strength of the aroma) is controlled by a wireless digital system. This leaves wide scope for designers' imagination: a couple of different OGs can be placed on a narrow strip worn right under the user's nose in the manner of a carnival false mustache, or nine such elements can be assembled on the inner surface of a mask covering the nose and mouth, and so on. There are, of course, plenty of problems with this approach: merely clearing away the molecules of a previously released aroma when quickly moving on to the next virtual object to be sniffed is a challenge in itself, and for all their compactness the proposed OGs can hardly be called miniature. And yet progress toward olfactory virtual reality is clearly visible, even though full-scale immersion in it is still a long way off.
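A minimal control-loop sketch for such an array of single-aroma generators might look as follows; the class, the duty-cycle mapping and the send stub are assumptions, not the published device's real wireless protocol.

```python
# Illustrative controller for an array of single-aroma olfactory generators
# (OGs): each holds one scent, and heating power sets its strength.
# The mapping and the transport stub are assumptions for this sketch.

class OlfactoryGenerator:
    def __init__(self, scent: str):
        self.scent = scent
        self.duty = 0.0  # heater PWM duty cycle, 0..1

    def set_intensity(self, intensity: float) -> None:
        # Stronger desired aroma -> more heating power -> faster evaporation.
        self.duty = max(0.0, min(1.0, intensity))
        self._send(self.duty)

    def _send(self, duty: float) -> None:
        # Stand-in for the wireless command actually sent to the wearable unit.
        print(f"[OG:{self.scent}] heater duty -> {duty:.2f}")

mask = [OlfactoryGenerator(s) for s in ("lavender", "coffee", "sea breeze")]
mask[1].set_intensity(0.8)  # the user leans toward a virtual coffee cup
mask[0].set_intensity(0.0)  # the previous scent is switched off
```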
⇡#Hands – touch!
Compared to tastes and smells, conveying tactile sensations to the user through a computer interface looks like an almost trivial task: many modern games use the haptic capabilities of specialized controllers (simply put, vibration activated at certain moments), and even for advanced smartphones haptic feedback when interacting with the on-screen interface is quite typical. In fact, as early as the end of the last century, almost simultaneously with the first VR helmets, computer gloves appeared as a human-machine interface for conveniently manipulating objects in the digital world, including with feedback implemented by those same vibration-producing servomotors. There were even such curious projects as a sensory vest for chickens, which made it possible to pet the bird wearing it over the Internet by touching a bird figurine dotted with appropriate sensors and paired with the vest. In short, as long as there is a hand (or a back, in the case of pads on gaming chairs with tactile feedback; or simply a patch of skin to which tactile stimulation is transmitted through a special pad) and some object in contact with it that can be fitted with servos, including very precise ones, tactile perception of virtuality poses no particular problems.
Difficulties begin when there is simply no suitable object. Say, tactile feedback on a smartphone is most often implemented by vibration of the entire device – simply because placing dedicated actuators under individual sections of the display is both too expensive and not very practical, and certainly would not reduce the thickness of the gadget's body. Yet it would be far more useful and convenient if the flat screen suddenly acquired volume exactly where the user's finger presses – after all, tactile feedback (especially when typing, though that is far from the only application of such technology) greatly facilitates interaction with the digital objects shown on the display. Developments of this kind have been under way for several years, and one of the most recent comes from Carnegie Mellon University: Flat Panel Haptics, a fairly thin layer that can be placed directly under a (flexible, naturally) OLED matrix. Here the not-always-practical ability of a smartphone display to change its geometry without losing its basic operating properties turns out to be more than appropriate: the electroosmotic pumps built into Flat Panel Haptics drive a special liquid into the desired area of the screen, making the thin layer swell into a bulge up to 1.5 mm high and 2 to 10 mm in diameter that feels quite firm to the touch. Forming (or retracting) each such dynamic button takes about 1 s, which is quite enough for the vast majority of applications, from on-screen keyboards to interactive elements of mobile games. The project still has room to grow – in particular, its successful promotion would benefit from attracting the attention of both developers of popular applications and designers of new smartphones – but on the whole this approach to giving tactile depth to digital objects on screens looks quite promising. Moreover, humanity, having interacted closely with touch screens since the early 2010s, is gradually rediscovering the importance and value of the good old "push-button" interfaces that reinforce visual feedback with tactile feedback – and developers of automotive control systems, for example, have clearly been following this trend of late.
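What a driver API for such dynamic buttons might look like is sketched below, using the figures quoted above (a bulge up to about 1.5 mm high, 2-10 mm across, roughly a second to form or retract); the class and method names are hypothetical, not Flat Panel Haptics' actual interface.

```python
# Hypothetical API sketch for electroosmotic "dynamic buttons". Dimensions
# and timing follow the figures in the text; everything else is assumed.
import time

class DynamicButton:
    PUMP_TIME_S = 1.0  # approximate time to pump the fluid in or out

    def __init__(self, x_mm: float, y_mm: float, diameter_mm: float = 8.0):
        if not 2.0 <= diameter_mm <= 10.0:
            raise ValueError("supported bulge diameter is 2-10 mm")
        self.x_mm, self.y_mm, self.diameter_mm = x_mm, y_mm, diameter_mm
        self.raised = False

    def raise_button(self) -> None:
        time.sleep(self.PUMP_TIME_S)  # fluid pumped under the OLED layer
        self.raised = True

    def lower_button(self) -> None:
        time.sleep(self.PUMP_TIME_S)  # fluid drained, screen flat again
        self.raised = False

# Raise a row of on-screen keyboard keys before the user starts typing.
keys = [DynamicButton(x_mm=10 + 12 * i, y_mm=40) for i in range(3)]
for key in keys:
    key.raise_button()
```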
Another question: what to do if there is no screen at all? In virtual reality proper, sophisticated haptic gloves can come to the rescue – gloves whose design prevents, for example, the hand from closing freely when the user takes hold of some virtual object, so that the person gets the feeling of physically holding a digital item. True, this sensation will need careful tuning: a swing with a real sword, thanks to the considerable inertia of its blade, is felt not only in the palm gripping the hilt but in the muscles of the whole arm, whereas a virtual sword weighs nothing (even if it is held confidently because the servos keep the glove from closing completely), so the impression of swinging it will be quite different. But again, since "glove" haptic interfaces are physical objects directly adjacent to the user's body, an acceptable solution will surely be found.
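A minimal sketch of that grip-resistance logic is given below, assuming per-frame hand tracking; the object model, the aperture measure and the servo-locking stub are invented for illustration.

```python
# Toy per-frame logic: when the tracked hand would close tighter than the
# grasped virtual object allows, the glove's servos block further closing.
# All names and the locking stub are assumptions for this sketch.
from typing import Optional

class HapticGlove:
    def __init__(self) -> None:
        self.locked_at_cm: Optional[float] = None  # aperture where servos hold

    def update(self, grip_aperture_cm: float,
               object_diameter_cm: Optional[float]) -> None:
        """Call every frame with the current hand opening and the grasped object, if any."""
        if object_diameter_cm is not None and grip_aperture_cm <= object_diameter_cm:
            if self.locked_at_cm is None:
                self.locked_at_cm = object_diameter_cm
                self._lock_servos(object_diameter_cm)
        else:
            self.locked_at_cm = None  # nothing in the hand: fingers move freely

    def _lock_servos(self, aperture_cm: float) -> None:
        print(f"servos resist closing below {aperture_cm:.1f} cm")

glove = HapticGlove()
glove.update(grip_aperture_cm=3.2, object_diameter_cm=4.0)  # closing onto a virtual hilt
```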
Is it possible to do without any material layer at all between the digital image and the human organs of sensory – in particular tactile – perception? Strictly speaking, the mediation of matter cannot be abandoned entirely – our body is far from capable of directly perceiving Plato's ideal forms – but it is quite possible to move from crude physical objects to somewhat more ethereal media. This is the approach proposed by a joint group of researchers from several Japanese universities for forming three-dimensional visual images directly in the air with femtosecond lasers. At the focal point of such an emitter, the molecules of the gases that make up the air are ionized and excited into a plasma, emitting fairly short-wavelength photons that the human eye perceives as a bluish-violet glow. Interestingly, if the same procedure is performed with lasers generating nanosecond pulses, the resulting plasma turns out to be too long-lived (each voxel – that is, each three-dimensional pixel of the image formed in the air – retains its state for those same nanoseconds) and therefore hazardous both to people and to surrounding objects: nanosecond laser-induced volumetric displays are simply dangerous unless the necessary precautions are taken.
With femtosecond pulses (1 fs = 10⁻¹⁵ s) everything is much simpler (although the lasers themselves are technically more complex): the Fairy Lights system created by the Japanese team generates voxels with infrared lasers whose individual pulses last up to several tens of femtoseconds; accordingly, the plasma microcloud at the focal point does not have time (given the short period over which it is pumped with energy) to reach a temperature high enough to damage a physical object or burn the skin of a person reaching out to a voxel image hanging in the air. Better still: if the beam of such a laser hits, say, the finger of someone wishing to touch the ethereal picture, different molecules are converted into plasma, which shifts the hue of the glow toward the warm part of the visible spectrum and increases its brightness (since the density of molecules at the skin is clearly higher than in the air), so the voxel image responds to touch plainly and without any additional tricks. What is more, the shock waves inevitably produced when supersonic plasma particles collide with that same inquisitive finger exert a purely physical pressure on the skin – and the person feels, faintly but quite distinctly, a touch on a seemingly immaterial entity. Clearly, if Fairy Lights is properly paired with an augmented reality headset, it will in time become possible to create full-color, photorealistic objects that respond to touch – and that may well win over those potential users who today simply cannot imagine what they would do in virtual worlds.
Taste, smell and tactile interfaces are an extremely attractive area of VR/AR/MR/XR development, but they also pose many challenges to researchers. Miniaturization is only one side of the issue; protection against malicious use is another, no less important. If hacking a flavor or aroma generator will, in the worst case, merely make some virtual bananas smell like strawberries (loading the aroma of, say, durian into the container of an ultrasonic atomizer can only be done by hand; a computer virus will not help here), then tampering with a femtosecond laser so that it starts generating health-threatening nanosecond pulses would have far less pleasant consequences. In any case, any reliable virtualization of the senses beyond sight and hearing remains a matter of a fairly distant future. Moreover, it is hard to say at the moment which problem will be solved sooner: this one, or the creation of a fully functional invasive neural interface that is not subject to contact degradation (in the sense of the inevitable overgrowth by glial cells and the resulting loss of signal-exchange quality with nervous tissue). Wait and see!