01: Augmented Reality Technologies
02: Haptic Smart Clothing
03: Augmented Reality Controls
04: Eye Tracking
05: The Plant
06: Universal Remote Control
07: AI, Other Absent Stuff, And Living Outside's Timeframe
08: Ataraxia - Example Game World
We already have basic audiovisual augmented reality technologies. There are smartphone apps that add virtual overlays to show where your friends are, or where to find food. More immersive technologies are on the way. Implanted augmentation technologies like neural implants offer the most potential (see tech notes 05 and 06), but non-implant devices will be common first.
For augmenting vision, bionic contacts like Mike's could potentially transform their wearer's view into anything their eyes could see, visually altering the environment at will.
Virtual Retinal Display glasses have been used for some time. They paint images directly onto the retina with lasers. How cool is that? They do have limitations. They might have problems if you're running, and may not be as immersive as bionic contacts could be.
For augmented hearing, there will be something along the lines of Mike's "ear inserts". You could do a bunch with these. Mike's ear inserts fit snugly and comfortably into his ear canals, and provide the best quality 3D sound the human ear can perceive. They can be used: to play or mix digital sounds with the environment; as ear plugs; to control volume; to filter certain sounds away and amplify others; to record sounds; possibly for picking up subvocalizations for discreet communication; and for eye tracking (see tech notes 04).
As of 2011, almost 200,000 people have received cochlear implants. These directly stimulate the auditory nerves in the cochlea, and restore at least some hearing. They are sometimes also used to reduce or eliminate tinnitus.
Bionic contacts and ear inserts obviously need power to run. Ear inserts might be able to run on a battery for a while, but batteries and contacts don't really mix. Luckily, there are several forms of wireless energy transfer which can keep contacts, ear inserts, and internal implants running. Magnetic inductive charging and RF radiation are examples.
But what provides the wireless energy for these devices? Various places in a home or office could be outfitted with transmitters; transmission could be built into cars, bags, or helmets, or even provided by batteries woven into your own clothing. Clothing built to take advantage of the piezoelectric effect might even generate some or all of the required energy from body movement.
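To get a feel for whether body-harvested power could actually run these devices, here's a toy power budget. Every number in it is an illustrative assumption of mine, not a measured figure; the point is only the shape of the calculation.

```python
# Back-of-envelope power budget for body-worn wireless devices.
# All milliwatt figures below are invented assumptions for illustration.

HARVEST_MW = {
    "piezo_clothing_walking": 1.0,    # assumed harvest while walking
    "body_heat_thermoelectric": 0.5,  # assumed
}
DRAW_MW = {
    "ear_inserts": 0.3,      # assumed continuous draw
    "bionic_contacts": 0.8,  # assumed
}

def budget(harvest, draw):
    """Return (total harvest, total draw, surplus) in milliwatts."""
    h = sum(harvest.values())
    d = sum(draw.values())
    return h, d, h - d

h, d, surplus = budget(HARVEST_MW, DRAW_MW)
print(f"harvest={h:.1f} mW, draw={d:.1f} mW, surplus={surplus:.1f} mW")
```

If the surplus ever goes negative, that's where room transmitters or woven batteries would have to make up the difference.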
Haptic technology involves tactile feedback: touch, pressure, heat, resistance, and so on. Neural implants will eventually be able to provide more direct feedback through the nervous system, but haptic technology has a lot of potential.
Mike's clothes seem pretty ridiculous by our current standards. He has numerous cameras (covering a wide spectrum of light) woven into his clothing (mostly his jacket), along with microphones, laser, terahertz, and radar systems, and scent sensors. These are for enhancing his information about his environment for safety reasons, for his augmented reality functions, and to help him paint an accurate simulated environment for people who visit him via virtual telepresence.
His clothes are touch sensitive, and are able to restrain his movement in a limited way for his AR functions using artificial muscles, perhaps using carbon nanotubes. His shirt and jacket are capable of displaying images, even full movies, and playing sounds. Yes, some people in the future will use this technology to obnoxious ends. There are always trade-offs.
In the story, most of Mike's sensors are in his jacket, so he wouldn't have to clean it as often. His pants would largely be used for their touch sensitivity for things like keysphere operations, and perhaps for haptic resistance, to simulate someone touching his legs.
Have I gone a little far with all of these capabilities? Did I not go far enough? Will everyone only have one set of clothes? How machine washable will augmented clothing be? Only time will tell.
For those without implants, there will have to be some way to enter information and control programs while moving around. These would likely be largely controlled by hands, and would preferably require minimal hardware for the purpose of mobility and convenience. No one wants to carry around a keyboard. In fact, I think that the best systems would require no hardware at all. Thus, the "non-planted" characters in Living Outside primarily use virtual mice and keyspheres.
I imagine that virtual keyboards will be popular first, due to familiarity. But the shape of a keyboard is not optimal. Imagine typing on a virtual keyboard, hovering in front of you, your fingers tracked by camera and haptics. Now imagine its keys wrapped around into a sphere shape. The first advantage this shape has is that it's more natural to manipulate than a board. Hands naturally face each other when brought up from the sides. To type on a keyboard, you have to pronate your hands. Holding a ball is more comfortable. A keysphere would also allow more space for additional controls than a flat setup.
Once experienced with a keysphere, a user could type anywhere. Hands would not have to be held opposite each other, but could drop to the user's sides and enter input on their legs. A semi-transparent keysphere with ghost hands representing the user's hand position could continue to appear as a guide for precision, at least for less experienced users. Keyspheres could be extremely customizable. Experienced users could add multiple layers of context, perhaps depending on the angle that the wrist is held, or some combination of keys that would toggle numerical mode or whatever.
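The layered-context idea above can be sketched in a few lines. The layer names, finger labels, and sphere regions here are all hypothetical; a real keysphere would have far more keys and triggers.

```python
# Sketch of a keysphere with context layers, as described above.
# Layers could be toggled by wrist angle or a key chord; here it's a method call.

class Keysphere:
    def __init__(self):
        # Each layer maps a (finger, sphere-region) contact to a symbol.
        self.layers = {
            "alpha":   {("index_r", "equator_1"): "j", ("index_l", "equator_1"): "f"},
            "numeric": {("index_r", "equator_1"): "4", ("index_l", "equator_1"): "7"},
        }
        self.active = "alpha"

    def toggle(self, layer):
        """Switch context layer (e.g. triggered by wrist angle)."""
        if layer in self.layers:
            self.active = layer

    def press(self, finger, region):
        """Resolve a tracked finger contact to a symbol in the active layer."""
        return self.layers[self.active].get((finger, region))

ks = Keysphere()
print(ks.press("index_r", "equator_1"))  # j
ks.toggle("numeric")
print(ks.press("index_r", "equator_1"))  # 4
```

The same contact produces different symbols per layer, which is what would let experienced users pile on numerical modes and custom contexts.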
There are several ways to track finger movement for any virtual control system. You could type anywhere a camera - visible or infrared (for the dark) - could see your hands, such as in front of the camera embedded in a pair of augmented reality glasses, or in view of any of the multiple cameras on Mike's clothing. You could also use touch sensitive haptic wear, such as most of Mike's outfit, but particularly his gloves which precisely track his hands and fingers. Haptic gloves could also provide feedback while handling the keysphere, creating the feel of a ball, providing a slight vibration, or maybe a small "catch" while gliding a finger over a key.
Hands could also be used as a kind of virtual mouse, allowing manipulation of augmented reality content in three dimensions. It's easy to imagine selecting a virtual object, by hand gesture or eye tracking, then moving it in three dimensions with small hand movements. Such control systems are in their infancy right now, but when mature, they could enable intricate and intuitive controls not possible with anything widely available today. Eventually, physical keyboards and mice might go out of fashion. For one thing, it would be nice to be able to type with your fingers in any position, and virtual controls, liberated from hardware, would be more flexible than their physical counterparts. They would likely also help with repetitive stress injuries.
I don't know how popular wearing gloves all the time just for control purposes would be. The camera method would probably be more convenient. However, haptic gloves could also provide haptic resistance and tactile sensations like the roughness of a virtual object or its heat. Those are great features for virtual reality, but I don't know how much they would matter to people going to the grocery store. Also, let's just say that Mike's gloves are easily washable, so he can wash them in a sink for hygiene like he would wash his hands.
There are several methods of tracking eyes. Bionic contacts might be able to track where a person's eyes are looking. Cameras can already do this fairly well. Augmented glasses presumably use virtual retinal displays, which obviously have to know where your eyes are, so that's an easy one too. Check the Related Tech Links for a link about headphones that track the eyes based on the cornea's positive charge. Let's just say that Mike's ear inserts are tracking his eyes using this method. Bionic contacts might add even more charge, making it easier.
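Whatever the sensing method (cameras, corneal charge, contacts), raw gaze samples are jittery, so any of these trackers would need a smoothing stage. A simple exponential moving average is one standard way to do it; the alpha value and sample data here are made up.

```python
# Exponential smoothing of noisy gaze samples - a standard stabilization
# step any of the eye-tracking methods above would need. Values illustrative.

def smooth_gaze(samples, alpha=0.3):
    """Exponentially smooth a stream of (x, y) gaze coordinates."""
    if not samples:
        return []
    sx, sy = samples[0]
    out = [(sx, sy)]
    for x, y in samples[1:]:
        sx = alpha * x + (1 - alpha) * sx
        sy = alpha * y + (1 - alpha) * sy
        out.append((sx, sy))
    return out

raw = [(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (1.0, 0.0)]
print(smooth_gaze(raw))
```

Lower alpha means steadier but laggier gaze; a real system would tune this against saccade detection.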
In Living Outside, "plant" can refer to any neural implant, but "the plant" refers to the system of neural implants which enable full immersion virtual reality. This system is designed to stimulate all of the physical senses: sight, sound, touch, pressure, pain, temperature, smell, taste, acceleration, balance, etc., as well as translating motor signals from the brain into proxy commands for virtual movement.
The simplest way I am aware of to accomplish this is to connect the brain's sensory and motor nerves with implants which would act as nerve signal routers. This would involve merging implants with the spinal cord and the 12 cranial nerves (most of which connect to the conveniently located medulla). The signals sent along these nerves are relatively straightforward to read and stimulate as compared to higher brain functions.
Implants for vision and hearing are pretty straightforward, and we have early versions of them today: cochlear implants and various visual prosthetics. Creating a full body virtual experience is much more difficult, but there are a multitude of benefits to engaging all of the senses.
Tactile feedback and motor controls are necessary for immersive virtual reality in that they allow: a fuller sense of presence; natural and free movement; sophistication and depth of social interactions; intuitive control schemes; and the ability to feel like you're really flying, eat food, and have sex. On top of these reasons, there are also wilder possibilities, such as remapping your senses to inhabit different bodies or modifying your genitalia from one sex to another (or into a tentacle). Truly convincing a brain that it inhabits a virtual body requires discomfort and pain to add authenticity and spice. You also have to be able to inhibit the signals being sent to your brain from your body to avoid sensory confusion.
While haptic clothing could create some level of physical feedback, to truly and satisfactorily accomplish most of this requires a system of full immersion brain implants. It seems pretty implausible to me that technologies for reading and manipulating brain activity from outside the skull, such as EEG, fMRI, or transcranial magnetic stimulation, could offer any meaningful virtual reality experience. Tech that could fit in a hat might be able to offer some useful thought-based control schemes, though. But nothing could compare with neural implants. When they become safe and economical, tactile and other sensory implants will inevitably become commonplace. This will happen for many reasons, but one will probably drive adoption more than any other: virtual sex.
Below is an idea for a tactile implant technology I find plausible. Who knows how the technology will evolve? Don't judge me harshly, the future!
The Plant: Implantation
For the sake of Living Outside, I imagine a system of implants attached to the main sensory nerves feeding into the brain, specifically the 12 cranial nerves and the spinal cord. I'll use the "spinal plant" as an example, but they would all work like this. The spinal cord contains ~20 million axons (I'm not sure how many for any given cross section) sending information signals from the body to the brain, and motor signals from the brain to the body. Input/output signals from the brain would be relatively straightforward to read and manipulate, compared to higher brain processing of the senses.
Ideally, the spinal plant would be injectable. It would be injected next to the spinal cord, somewhere below where it connects with the brain stem. The device would self-assemble and form a ring around a thin section of the spinal cord. It would then flood the spinal cord with millions of biohybrid nanobots (or other connecting elements) and nanoscale tendrils. These elements, much smaller than the diameter of an axon (1 micrometer or so), would establish connections with each of the spinal cord's axons, or with groups of axons. Doing this effectively without damaging the spinal cord will take some engineering, but it's plausible that we'll have this level of nanotechnology in the next 30 years.
Each of the axon/plant connections allows the spinal implant to control the flow of axonal "information" from and to the brain, by altering or suppressing each axon's frequency. In effect, such a plant would be a wireless router, determining the flow of sensory information and motor controls among the brain, the body, and simulated proxies.
Many of these connections could be disintegrated or neutralized once calibration determines that their physiological correlations are not useful for plant purposes. I imagine that implants will be designed to be permanent, although they would ideally have the ability to dissolve, and be absorbed by the body, in case of a problem or to make room for an upgrade.
Cyberizing the cranial nerve fibers, such as the olfactory or optic nerves, could be done in a similar way with smaller implants. Many of these conveniently connect to the medulla, near the top of the spinal cord. Perhaps a system of implants on these nerves, connecting to the spinal plant as a hub, could work.
With self-assembling nano-electronics, implantation could become routine, affordable, and require no invasive brain surgery. I recognize how extreme and risky a spinal implant seems now, but with advances in technology I believe it will eventually carry only a small, acceptable risk considering its enormous potential benefits. Full immersion virtual reality will enable some pretty incredible things, as I hope I've demonstrated in Living Outside.
The Plant: Energy Needs
Most of the computation involved with the Plant would be done outside the skull. Tasks such as generating patterns of axonal stimulation to create sensations in the brain would be performed by outside computers. With efficient design, the plant would have small energy needs. Actually stimulating the spinal cord, for example, would require very little power. For obvious reasons, implant battery capacity will have to be limited. Some of its needs could be met by taking advantage of its environment: ambient body heat, kinetic energy, the piezoelectric effect, blood sugar, and maybe even the electrical impulses of the spinal axons themselves.
The spinal plant could be connected by wires to the other implants and to a physical terminal in the back of the neck, which could supply power and communication. To reduce the risk of infection, the terminal could be embedded in the skull underneath the skin and interact wirelessly through the skin. As awesome as head terminals look, they don't seem like a good idea to me, due to infection and risk of damage. They would work for robotic bodies like Ghost In The Shell's Major Kusanagi, of course.
However, wireless technology would be better. For Living Outside, Thomas' plant gets most of its power wirelessly by magnetic induction. By this point wireless bandwidth shouldn't be a problem for handling the human brain's sensory and motor bandwidth needs. I've seen 1 gigabyte a second given as a safe upper bound on the spinal cord's bandwidth, and that's probably much too high for all of the senses combined. That's not counting the potential extra bandwidth for enhanced cyber senses, of course. There's no reason to think that the brain couldn't work with higher resolution sight than provided by our wonderful yet flawed eyeballs.
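That "1 gigabyte a second" upper bound can be sanity-checked with the axon count from the implantation section. The axon count comes from the text; the firing rate and bits-per-event are my assumptions.

```python
# Sanity check on the ~1 GB/s spinal-cord bandwidth figure quoted above.
# Axon count is from the text; the other two numbers are assumptions.

AXONS = 20_000_000   # ~20 million axons (from the text)
MAX_RATE_HZ = 300    # assumed upper-bound firing rate per axon
BITS_PER_EVENT = 1   # assumed: one bit per possible spike slot

bits_per_sec = AXONS * MAX_RATE_HZ * BITS_PER_EVENT
gb_per_sec = bits_per_sec / 8 / 1e9
print(f"upper bound ~ {gb_per_sec:.2f} GB/s")
```

That lands in the same ballpark as the quoted figure, and real sensory coding is far more redundant, so the practical requirement should be much lower.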
The Plant: Security
For obvious safety reasons, the plant will contain only the most basic firmware necessary. There should be nothing there to hack, though people will try anyway. And they will occasionally succeed at hacking the surrounding systems, which will support and manage the plant, receiving its input and supplying it with stimulation. That's why the plant will have to have some sort of kill switch for the safety and comfort of its user. It could be activated either by code or by a special motor command of its user, a physical safe word. This action would have to be distinct, and could both stop the plant from operating and start it again. Let's say that T's special motor command is attempting to put his heels together and curl his toes inward.
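Here's a sketch of how the kill switch might be gated. The pattern labels, sample rate, and hold time are all invented; the design point is just that the toggle requires a distinct pattern held deliberately, so it can't fire by accident.

```python
# Sketch of a kill-switch detector: the plant toggles only when a distinct,
# deliberate motor pattern is held. Pattern and hold time are made up.

KILL_PATTERN = {"heels_together", "toes_curled_inward"}
HOLD_FRAMES = 30  # assumed: pattern held ~1 second at 30 samples/second

class KillSwitch:
    def __init__(self):
        self.active = True  # plant running
        self.held = 0

    def update(self, motor_signals):
        """Feed one frame of decoded motor intents; toggle on a sustained match."""
        if KILL_PATTERN <= set(motor_signals):
            self.held += 1
            if self.held >= HOLD_FRAMES:
                self.active = not self.active  # stop, or restart, the plant
                self.held = 0
        else:
            self.held = 0  # any interruption resets the hold
        return self.active

sw = KillSwitch()
for _ in range(HOLD_FRAMES):
    running = sw.update(["heels_together", "toes_curled_inward"])
print(running)  # False: plant stopped
```

The same sustained gesture would start the plant again, which matters because the switch has to work in both directions.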
For the safety and comfort of the Plant's user, there would have to be a limit put on pain, heat, pressure and other things that could cause discomfort. I'm not sure what portion of possible pain should be allowed. 10% of maximum in any given area? Let's just say enough to suck under the worst case, but not an unbearable amount.
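A discomfort ceiling could be as simple as a clamp applied before signals reach the brain. The 10% figure echoes the question above, and the 0.0 to 1.0 intensity model is invented for the sketch.

```python
# Sketch of a hard ceiling on discomfort signals. The ceiling value and
# the 0.0-1.0 intensity model are illustrative assumptions.

PAIN_CEILING = 0.10  # fraction of maximum intensity allowed through

def clamp_sensation(kind, intensity):
    """Limit pain/heat/pressure intensities before they reach the brain."""
    if kind in ("pain", "heat", "pressure"):
        return min(intensity, PAIN_CEILING)
    return intensity

print(clamp_sensation("pain", 0.8))   # 0.1
print(clamp_sensation("touch", 0.8))  # 0.8
```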
As a bonus, you could turn off severe or annoying physical pain or discomfort. It's true that pain is necessary for the continued safety and health of the body. But how long do you have to suffer after you've stubbed your toe? Isn't 2 seconds enough punishment for not watching where you're going? And why should anyone have to deal with the pain of stomach cancer? Besides pain, you could change what food tastes like, or enhance sex (use your imagination or check out the Omni episode).
The Plant: Information Processing As It Relates To The 2nd Episode
Here's how the Plant worked with T's physical body during his time as an insubstantial sphere in the 2nd episode. While Thomas was a sphere, his physical body's nerves continually sent signals into and up his spinal cord, most of which were intercepted and muted by his spinal Plant. If T was not projecting to a proxy, but simply inhabiting his own body, the Plant would simply have let the axon signals go by unhindered.
T's Plant has complete control over his axonal signals. It can stop them entirely, record them, replay recorded signals, partially mute them, enhance them, replace them with simulated signals, or mix physical and "virtual" signals.
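The routing modes listed above can be sketched as a per-axon state machine. The float-per-tick signal model and the specific gain values are inventions for illustration; a real plant would deal in spike trains.

```python
# Sketch of the per-axon routing modes listed above: pass, mute, partial
# mute, enhance, record, replay, replace, and mix. Signal model is invented.

class AxonRouter:
    def __init__(self):
        self.mode = "pass"
        self.recording = []
        self.replay_buf = []
        self.virtual = 0.0  # simulated signal supplied by outside computers
        self.mix = 0.5      # physical/virtual blend when mode == "mix"

    def route(self, physical):
        """Transform one tick of a physical axon signal according to mode."""
        if self.mode == "record":
            self.recording.append(physical)
            return physical
        if self.mode == "replay":
            return self.replay_buf.pop(0) if self.replay_buf else 0.0
        return {
            "pass":      physical,
            "mute":      0.0,
            "attenuate": physical * 0.25,  # "partially mute"
            "enhance":   physical * 2.0,
            "replace":   self.virtual,
            "mix":       self.mix * self.virtual + (1 - self.mix) * physical,
        }[self.mode]

r = AxonRouter()
r.mode, r.virtual = "replace", 0.9
print(r.route(0.3))  # 0.9: the brain gets the simulated signal instead
```

Scaled up to millions of axons, "replace" is a full-immersion proxy, "mute" is what T's resting body gets, and "mix" blends physical and virtual sensation.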
Most of the processing for this would be done outside of the head. For many purposes a plant user might simply need to be near a wireless network source. But for intense activity, where close proximity might be useful, I imagine them wearing thin, comfortable helmets; maybe even so thin as to be embedded in a cloth skullcap or a hat. I imagine proxy users lying with their heads on cyber-enhanced pillows.
Users with full-sense proxies, like T's before he became a sphere, would mute their physical senses so they wouldn't be distracted by them. Full immersion proxies work by sending simulated senses to the brain via the Plant. The Plant stimulates the spinal axons in a way which corresponds with the intended sensation. The feeling of a particular virtual wood grain on a bare foot, for example. Stimulating all of a proxy's sensations together gives the brain the perceptions of a virtual body in just the same way that it is given perceptions of a physical body. With a good enough Plant, the user wouldn't be able to tell the difference.
Without tactile stimulation from a proxy, T would be able to faintly feel his body. His brain would rapidly habituate to the sensations coming in from his resting body, much like how people are usually unaware of most of their body. For example, your toes or scalp are unobtrusive most of the time. With the same regular stimulation, the conscious mind will simply ignore the body, just like the effects of a sensory deprivation tank.
Thomas, like any proxy user who had taken common sense precautions, could entirely cut off his physical sensations relatively safely. He has cameras in his apartment that would alert him of problems, and health sensors in his body that would detect significant dangers, so he wouldn't have to worry about his safety while he was projecting away from his physical body. The Plant would also monitor the body's muted signals to make sure nothing went wrong, or maybe even move body parts that hurt or fell asleep.
The brain, deprived of tactile sensation, will eventually start to hallucinate. Occasionally giving it full body pulses should give the brain enough to work with to fix that. A simpler solution would simply be to let the body's own faint signals do the same, which is what T does.
I imagine that plant users with penchants for lack of tactile sensation could develop cases of phantom limb syndrome, although that could be easily prevented by giving the brain some minor sensations to work with every once in a while. And curing it would be as simple as restoring limb sensation.
Related Tech Links: Proprioception
The Plant: Other Full Immersion Implant Ideas
There were three other implant technologies I considered for this story. One involved flooding the brain with nanobots. They would then attach to sensory and motor nerves, and essentially do the same thing that the spinal plant does, but in a decentralized way. This would be cool, would eliminate some difficulties, and I don't see why it won't eventually be plausible. I think this could be a successor to the plant method described above, which supplies a centralized base for communication and power. I don't know how those issues would work out with a decentralized cloud of nanobots, and so I would feel weird using this idea since it seems too much like magic given my current ignorance.
The second idea is to implant a device in, or over, the somatosensory cortex: the area of the brain responsible, crudely speaking, for dealing with the senses. A motor cortex implant could be used to control proxy bodies. This is potentially much more complicated than a spinal plant. A spinal plant just deals with axons and their signal frequencies. Interfacing with the somatosensory cortex involves dealing with sense processing and more advanced brain functions. Again, there are some unanswered questions. How do you use this method to inhibit specific sensations of the body, for instance?
The spinal Plant concept is much easier to grasp right now, but I think there are some very interesting possibilities in a somatosensory Plant. For example, instead of co-opting the established brain-body connection through the spinal cord, you could create the sensation of an entirely separate body. Then you could have two bodies at once. One physical and one, or even more, virtual proxies. How would the brain deal with having full sensory information from both a virtual and a physical body at the same time? It seems like it would be confusing, but the brain is pretty amazing with this sort of thing.
The third idea is to sever the spinal cord and actually put an implant router between its two sections. I got this idea from Marshall Brain. Eventually, this might be the most effective way to interface with the spinal cord, but it carries a pretty high ick factor.
Finally, I'm not going to deal with other types of brain enhancements in this story, though the possibilities are intriguing. I look forward to in-brain memory enhancement, recording of emotions and thoughts, direct interfaces between the visual center and art programs, and a whole host of other cognitive enhancements. But for Living Outside I wanted to focus on telepresence and virtual reality to show how those technologies alone could transform human existence. One way or another, we're going to approximate full immersion virtual reality as closely as possible, so the possibilities Living Outside presents should be reasonably applicable to the future of virtual and augmented reality.
The Universal Remote Control, or URC, is what I call a mature version of the motor cortex implant which already exists. This implant connects thousands of electrodes (or some other sensors) to various groups of neurons in the motor cortex. When certain neurons, or neuron groups, fire, the electrodes connected to them are activated and create some sort of feedback mechanism, such as moving a game character to the left, so the brain can learn to associate firing certain neurons with specific external consequences. The coolest thing about this is that the brain naturally bridges the gap to communicate with the outside world.
Let's just say that Living Outside's URC is purely passive: it just detects neuronal signals. It would be cool, and more powerful, if these implants eventually provide feedback. But simply passively sensing brain activity is sufficient. Like other brain implants, it should be designed with no hackable elements. I don't think it needs to be deactivated by the kill switch, because it can't directly influence the brain, and stopping it would limit user options.
The more electrodes measuring neurons in the motor cortex, the better. Right now the BrainGate chip has fewer than a hundred electrodes, but there could be many thousands of connections in the future. With that many points of articulation, you could perhaps control an entire secondary body with precision approaching that of your physical body. With practice, it apparently becomes quite natural as well. It might even be better in some ways. Stimulation of the URC by the motor cortex to clench a virtual fist is potentially faster than a signal sent through the spinal cord to a physical hand, because of the relatively slow speed at which nerve impulses travel through the body.
It would take a while to master, but once a user has mastered it in one context, it should be relatively easy to change contexts and create new mappings for the electrodes, similar to how typing on a keyboard gets different results depending on the program. The brain should be able to adapt to the controls for different programs, and to figure out that firing specific neurons produces different effects on the external world depending on context. Eventually, it should be easy and intuitive to switch between using an art program, using a keyboard, playing Hyper Grind, or "telepathically" communicating with other users. You could use it to launch and manage programs, change settings on the fly, instantly type messages at several times the speed possible with physical hands, control the motion of proxies, control virtual objects in 3D space with precision "telekinesis," control physical electronic equipment like TVs and cars, etc.
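The keyboard analogy above can be made concrete with a context table: the same neuron-group firing produces different effects depending on the active program. The group IDs, context names, and action labels here are all hypothetical.

```python
# Sketch of context-dependent URC mappings, per the keyboard analogy above.
# Neuron-group IDs and action names are invented for illustration.

CONTEXTS = {
    "keyboard":    {17: "type_e", 42: "type_t"},
    "hyper_grind": {17: "dodge_left", 42: "jump"},
    "telekinesis": {17: "pull_object", 42: "rotate_object"},
}

class URC:
    def __init__(self, context="keyboard"):
        self.context = context

    def decode(self, group_id):
        """Map a detected neuron-group firing to an action in the current context."""
        return CONTEXTS[self.context].get(group_id, "noop")

urc = URC()
print(urc.decode(17))  # type_e
urc.context = "hyper_grind"
print(urc.decode(17))  # dodge_left
```

The brain's side of the bargain is learning that firing group 17 means different things in different programs, which is exactly the kind of remapping brains already handle when switching between, say, a keyboard and a game controller.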
People with a URC, like Thomas, wouldn't have as much use for a keysphere. You don't get much better than a direct connection to the brain's motor cortex. T could probably use his URC to simulate hands to operate a keysphere fairly well, although that would be inefficient. T could also use his proxy hands, which are interpreted from motor signals sent down the spinal cord and intercepted by the plant, to operate a keysphere, but that would still be less efficient than commands through a URC. There are big sections of the brain devoted to controlling the hands though, so utilizing the brain's "hand functions" might make sense depending on what was going on.
As noted, a rudimentary form of this implant exists right now. BrainGate is an example.
The reader might feel that I have left out some important technologies from Living Outside, such as strong AI, advanced robotics, and bioengineering. Given the development of computing and brain implants in the first two episodes alone, you would expect other technologies to be present and accomplishing some incredible things. I'm hopeful that strong AI, for example, will be developed well before the end of the 21st century, and will have a serious impact on the world. But Living Outside is primarily about exploring the possibilities of VR and AR for personal fulfillment and societal transformation, so I'm "filtering out" other areas of technology to focus on those areas. Also, I have a much better feel for how VR will change society than how AI will.
So, what is the time frame for the technology in this story? Ray Kurzweil thinks that we'll have full immersion virtual reality by 2030. I would be surprised, given my imperfect understanding of current trends, if we didn't have it by 2050. But who the heck knows? Even giving time for society to adopt full immersion virtual reality and transform accordingly, I suspect this story could easily occur by 2070.
Predicting future technologies is notoriously difficult. No one truly envisioned the Internet, though some came close. No one, to my knowledge, posited that at some future time the teeming masses would carry small computers in their pockets which would enable them to instantly access the entire world's knowledge base, and allow them to communicate with people around the globe. As of 2013, there are a billion smartphones in the world. And who foresaw YouTube? A free service to which anyone can easily upload a video and have it potentially seen by hundreds of millions of people at no cost to themselves?
So why bother trying to imagine the future? Because there are developments that are foreseeable. The territory I cover in Living Outside requires advanced technologies that don't yet exist, but much of what I explore seems inevitable in one way or another. For example, take the Crain Slain episode. At some point, we will have interactive concerts with rhythm game components that will allow audience members to influence and become integrated into the concert. At the least, thinking about this stuff is fun.
What might a full-immersion, player created virtual world look like? First, some definitions. By “full immersion”, I mean that players could inhabit proxies within such a world, and experience that world as indistinguishable from physical reality, if such realism was desired. By “player created,” I mean that players create and shape that world how they want, with a large degree of freedom. Ataraxia is my attempt to imagine such a world.
Player Created Virtual Worlds are Going to be HUGE:
Video games today, such as World of Warcraft, feature vast virtual spaces, but offer limited social participation and immersion. These limitations restrict the development of participatory culture. When virtual reality allows for eye contact, facial expressions, nonverbal communication, touch, and even the ability to smell other people, virtual reality will be competitive with physical reality for socializing. And when people are given the tools they need to create worlds for themselves, virtual spaces will quickly dwarf the physical world. Just take a look at the videos below to see the world building already being accomplished in Minecraft. Keep in mind that the Minecraft players responsible are donating their time and effort for these projects.
In Living Outside, Ataraxia is inhabited at any given time by tens of millions of players from all over the world. Many people primarily live in that world, giving it their almost full time devotion. Its virtual land covers almost a million square miles, making it comparable in size to India, and is capable of expanding to fit player needs. While much of the land of Ataraxia is procedurally (randomly) generated, players are responsible for designing and implementing most details, from character models to the construction of cities. With enough effort, players can also alter the physics, graphics, and the combat rules of Ataraxia, enabling a profound degree of player directed evolution for that world.
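Procedural generation like Ataraxia's can be illustrated with a toy: a seeded midpoint-displacement heightline. Everything here is a deliberate simplification; real world-scale terrain generation layers many techniques on top of this.

```python
import random

# Toy sketch of procedural land generation: a seeded 1-D midpoint-displacement
# terrain profile. The parameters are illustrative, not Ataraxia's actual rules.

def heightline(seed, steps=4, roughness=0.5):
    """Generate a deterministic terrain profile by midpoint displacement."""
    rng = random.Random(seed)  # same seed always yields the same land
    pts = [0.0, 0.0]
    scale = 1.0
    for _ in range(steps):
        nxt = []
        for a, b in zip(pts, pts[1:]):
            mid = (a + b) / 2 + rng.uniform(-scale, scale)
            nxt += [a, mid]
        nxt.append(pts[-1])
        pts, scale = nxt, scale * roughness
    return pts

# Determinism is the key property: the world can expand on demand, and
# every player sees the same generated land for the same seed.
print(heightline(7) == heightline(7))  # True
```

Players would then design and build on top of what the generator hands them, which is the division of labor the paragraph above describes.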
A “norm” is what I call the set of properties of an area within Ataraxia: its physics, models, style, rules, and such. There are multiple “norms” in any iteration of Ataraxia, existing side by side in both peace and conflict. A fantasy setting may exist alongside a sci-fi setting, although divergent norms tend to be unstable, and so one will eventually become dominant and absorb, or destroy, the other. The energy that players put into a norm gives it momentum and inertia, and this is what allows players to evolve the world and give it direction.
Ataraxia as a Way of Life:
Ataraxia was named by several of its founders for a popular Outside warrior ideal: a state of liberation from anxiety and unnecessary preoccupations, a sense of tranquility even in the midst of epic warfare and strife. Ataraxia is a world where people can devote themselves to great or small causes- can strive, fight, and die for their own vision of how existence should be. In Ataraxia, people can live the type of life they need to, with a boldness and honesty that the constraints of physical reality would never allow.
Ataraxia features a large scale participatory culture. People choose to live in Ataraxia, part or full time, and can leave at any moment. What would such a culture be like? We are most familiar with the so-called "mainstream culture," which is by necessity jury-rigged and based on a lowest common denominator that many people find deeply unsatisfying. An active participatory culture formed voluntarily by enthusiastic players, on the other hand, could have a thriving vitality and authenticity rarely seen in the world today.
There is no set goal in Ataraxia. Players there can interact with each other in most ways they can in physical reality. Many players live in Ataraxia purely for social reasons. They build cities and live in them, form friendships, and make collaborative art. They have a part to play in shaping the world by changing or sustaining norms.
As a fully immersive virtual world with no strict rules, combat and even war are regular events. While there are safer game worlds to build communities in, many players enjoy the excitement and danger of living in a world capable of drastic and often violent change. Many find that conflict and competition spur the development of new ways of life, and keep things interesting. For many people, the possibility of having to go to war to protect a community they helped build is part of the appeal.
Most players are not there strictly for social or role playing reasons, and enjoy developing their character's combat abilities and participating in various levels of conflict. Some fight to help realize the goals of one of the many competing factions. Others fight for their own glory, or to wield power over the masses. Some love the sophisticated combat system. Others love the art of making weapons and devices, and customizing proxies. Some griefers simply desire to cause misery to as many people as possible. Others spend great effort protecting the good citizens of Ataraxia from senseless malice.
Physics and Proxies:
People create and control proxies in Ataraxia. Because realistic immersive game worlds like Ataraxia use a thorough simulation of physical reality as a base, a person controlling a proxy must either automate that proxy with ghosts or control every aspect of its motion themselves. Unless a ghost is set up to automate a proxy, firing a gun, for example, requires loading it, readying it, aiming it, firing it, and managing its recoil, all while maintaining balance if you are standing, among numerous other factors. In other words, you would have to command every muscle required to do the same action in physical reality. Conveniently, ghosts can automate most proxy activity, allowing players to turn a proxy into a more traditional video game character if desired, although playing so indirectly puts a player at a disadvantage compared to those who directly inhabit their proxies and work more seamlessly with their integrated ghosts.
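The ghost-automation idea can be sketched as a simple delegation pattern (again, my own hypothetical illustration; the command names and routines are invented): a ghost expands one high-level command into the low-level steps that a player without a ghost would have to issue one at a time.

```python
class Ghost:
    """Expands a high-level command into its low-level routine."""
    ROUTINES = {
        "fire_gun": ["load", "ready", "aim", "fire", "absorb_recoil",
                     "maintain_balance"],
    }

    def automate(self, command):
        # Unknown commands pass through as a single raw step.
        return list(self.ROUTINES.get(command, [command]))


class Proxy:
    def __init__(self, ghost=None):
        self.ghost = ghost
        self.log = []

    def act(self, command):
        # With a ghost, one command triggers the whole routine; without
        # one, the player must issue every low-level step directly.
        steps = self.ghost.automate(command) if self.ghost else [command]
        self.log.extend(steps)


automated = Proxy(ghost=Ghost())
automated.act("fire_gun")       # one command, six low-level steps

manual = Proxy()                # no ghost: each step is a separate command
for step in ["load", "ready", "aim", "fire"]:
    manual.act(step)
```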
Power and Developing Your Character:
Competition for resources and cultural dominance in Ataraxia is robust. Players gain power and materials by traditional MMORPG (massively multiplayer online role playing game) means, such as going on quests, grinding, and participating in the in-game economy, but also through conquest and social avenues. Joining and ascending to a position within a faction is a popular path. “Factions” of players share resources and hold territory, and if successful, can reshape the entire world.
Holding and developing territory in Ataraxia is the best way to gather power, and to establish dominance over a region. Changing a territory’s norm requires energy. The more extreme the change from surrounding norms, the more energy and work is required. With enough effort, a group could fortify a city to the extent that no outside force could conquer it, but other players would probably move around it, and there are usually more efficient uses of energy.
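The rule that more extreme changes from surrounding norms cost more energy could be sketched like this (a hypothetical model of my own; the property names, cost formula, and base cost are all invented for illustration):

```python
def divergence(norm_a, norm_b):
    """Count the properties on which two norms disagree."""
    keys = set(norm_a) | set(norm_b)
    return sum(1 for k in keys if norm_a.get(k) != norm_b.get(k))

def change_cost(new_norm, neighbors, base_cost=10.0):
    """Energy required to impose a norm on a territory: the more it
    diverges from each surrounding norm, the more it costs."""
    return base_cost * sum(divergence(new_norm, n) for n in neighbors)

medieval = {"tech": "swords", "magic": True}
horror = {"tech": "swords", "magic": True, "dread": True}
mecha = {"tech": "mecha", "magic": False}

# A mild shift (horror beside a medieval norm) is cheap; carving out
# a mech enclave in the same surroundings costs more.
print(change_cost(horror, [medieval]))  # 10.0
print(change_cost(mecha, [medieval]))   # 20.0
```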
To protect against lazy griefers, gaining significant prominence and power requires dedication and work. If a player wants to wreak senseless havoc on Ataraxia, they will have to work at it.
The combat system of Ataraxia is capable of fully realistic physics, but varies from iteration to iteration, and from norm to norm. In one norm, hitting someone with a sword might cut their head off, killing them, while in others it might just drain their energy. A proxy developed in and manifesting the resonance of one norm might not be effective in another. For example, a dragon might not be able to fly in a mech based norm, though this depends on a number of factors.
Death and Return:
Death is easy to return from for most players, although there might be temporary penalties to their power level to prevent them from reentering a combat zone right away. Returning from death at higher power levels requires undergoing certain challenges in the “underworld,” although these get easier the longer the player has been dead. Certain powerful proxies manifesting a great deal of their potential, such as the Pandemoniums, find it extremely difficult or impossible to return from death. Most players can return from death without losing equipment or powers, but there are exceptions.
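The return-from-death rules above can be illustrated with a toy model (my own sketch, not canon from Living Outside; the decay rate and threshold are invented): underworld challenges scale with a proxy's power level and ease the longer the player has been dead, so ordinary players return easily while Pandemonium-tier proxies may effectively never return.

```python
def underworld_difficulty(power_level, days_dead):
    """Challenge difficulty grows with power and decays with time dead."""
    return power_level * (0.9 ** days_dead)

def can_return(power_level, days_dead, threshold=50.0):
    """A player returns once the remaining difficulty is manageable.
    Very high power levels (e.g. a Pandemonium manifesting its full
    potential) may stay above the threshold for a very long time."""
    return underworld_difficulty(power_level, days_dead) <= threshold

print(can_return(power_level=40, days_dead=0))    # ordinary player: True
print(can_return(power_level=200, days_dead=5))   # still too hard: False
print(can_return(power_level=200, days_dead=20))  # a long wait eases it: True
```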
Cycles of Ataraxia:
The world of Ataraxia is cyclical, evolving with each iteration according to the momentum of its players. Each iteration, which may last months, or more typically years, ends when a new dominant paradigm is decided by its players, or if the situation has become intractably stagnant. Over its many incarnations, Ataraxia has been based around traditional fantasy adventure, space opera, horror, 20th century warfare, mecha, and various combinations of genres.
If an area of Ataraxia is too divergent from the mainstream norm, it may split off and become its own separate game world. The most notable example of this is CyFrenia, a mech based world that diverged when Ataraxia took a turn toward horror. The Faint faction in Ataraxia inhabits its own twilight dimension that overlaps the dominant norm, and at the time of Living Outside may be diverging into its own world.
Ataraxia is administered by “the Gears.” These highly respected players ascend to govern various aspects of Ataraxia and to keep it running. To do this, they must transcend allegiance to factions and their own self interest, or risk losing their reputation and thus their position. They operate as dungeon masters for the areas and aspects of Ataraxia that function like traditional games, creating and distributing quests and coordinating non-player proxies. But they also exist to maintain the core integrity of Ataraxia, and for this purpose they select certain special players to become guardians of the higher principles of the game world. These players are called Pandemoniums.
The 9 Pandemoniums:
In each iteration of Ataraxia, there are 9 players who ascend to become Pandemoniums. Each of the 9 Pandemoniums is granted special powers which represent the exceptional mastery of game mechanics that player has demonstrated. A few examples of Pan powers include: supreme control of mecha, manipulation of information flow, and control over game physics. Pandemoniums can effectively exert their exceptional power in any norm, and are forces to be reckoned with. Their purpose is to maintain the stability of Ataraxia overall and to prevent "Hell scenarios," cycles of pointless violence that are potentially devastating to the game world.