New Artificial Nerve Works Pretty Much the Way Ours Do

“A pretty nice advance” is what Robert Shepherd, an organic electronics expert at Cornell, calls the development of a new artificial nerve that can sense touch, process information, and communicate with other nerves in much the same way that nerves in our own bodies do.

According to a recent report in Science, it could lead to dramatic improvements in how people with artificial limbs—and someday robots—sense and interact with their environments.

Researchers led by chemist Zhenan Bao at Stanford have constructed the artificial nerve from flexible organic components arranged in three main parts.

First, a series of dozens of sensors pick up on pressure cues. Pressing on one of these sensors causes an increase in voltage between two electrodes. This change is then picked up by a second device called a ring oscillator, which converts voltage changes into a string of electrical pulses. These pulses, and those from other pressure sensor/ring oscillator combos, are fed into a third device called a synaptic transistor, which sends out a series of electrical pulses in patterns that match those produced by biological neurons.
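The chain from pressure to neuron-like output is easiest to see as a three-stage signal pipeline. The toy Python model below is only an illustrative sketch of that flow, with invented gains, frequencies, and time constants; the real device is analog organic hardware, not software.

```python
import numpy as np

def sensor_voltage(pressure, v_rest=1.0, gain=3.0):
    """Stage 1: pressing a sensor raises the voltage between its two electrodes."""
    return v_rest + gain * pressure

def ring_oscillator(voltage, base_hz=5.0, hz_per_volt=40.0, duration=0.1, dt=1e-3):
    """Stage 2: a ring oscillator turns the voltage into a pulse train whose
    frequency rises with the applied voltage."""
    freq = base_hz + hz_per_volt * voltage
    t = np.arange(0.0, duration, dt)
    phase = (freq * t) % 1.0
    # A pulse is emitted each time the oscillator phase wraps around.
    return (np.diff(phase, prepend=0.0) < 0).astype(float)

def synaptic_transistor(pulse_trains, tau=0.02, dt=1e-3):
    """Stage 3: a synaptic transistor merges pulse trains from several
    sensor/oscillator pairs into one smoothly decaying, neuron-like output."""
    pulses = np.sum(pulse_trains, axis=0)
    out = np.zeros_like(pulses)
    for i in range(1, len(pulses)):
        out[i] = out[i - 1] * np.exp(-dt / tau) + pulses[i]
    return out

# Two sensors pressed with different force feed one synaptic transistor.
trains = [ring_oscillator(sensor_voltage(p)) for p in (0.2, 0.8)]
print(synaptic_transistor(trains).max())
```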

Bao and her colleagues used their setup to detect the motion of a small rod moving in different directions across their pressure sensors, as well as to identify Braille characters. What’s more, they managed to connect their artificial nerve to a biological counterpart. The researchers detached a leg from a cockroach and connected an electrode from the artificial nerve to a neuron in the roach leg; signals coming from the artificial nerve caused muscles in the leg to contract.

Because organic electronics like these are inexpensive to make, the approach should allow scientists to integrate large numbers of artificial nerves that could pick up on multiple types of sensory information, Shepherd says. Such a system could provide far more sensory information to future prosthetics wearers, helping them better control their new appendages. It could also give future robots a greater ability to interact with their ever-changing environments—something vital for performing complex tasks, such as caring for the elderly.

Four Rights to Preserve Our 'Cognitive Liberty'

Cutting-edge neurodevices, such as sophisticated neuroimaging and brain-computer interfaces (BCI), make it possible to record, decode, and modulate the neural correlates of mental processes. Research shows that the combination of neuroimaging technology and artificial intelligence allows us to “read” correlates of mental states including hidden intentions, visual experiences or even dreams with an increasing degree of accuracy and resolution.

While these advances have a great potential for research and medicine, they pose a fundamental ethical, legal and social challenge: determining whether, or under what conditions, it is legitimate to gain access to, or to interfere with another person’s neural activity.

“We are facing a societal challenge: determining what rights individuals are entitled to exercise in relation to their mental dimension,” write Roberto Andorno, a bioethicist at the University of Zurich, and Marcello Ienca at the Institute for Biomedical Ethics at the University of Basel. “This challenge might require the reconceptualization of existing human rights and even the creation of new neurospecific human rights.”

Here are four they propose: the right to cognitive liberty, the right to mental privacy, the right to mental integrity, and the right to psychological continuity.

Thought-Controlled Nanobots in Cockroaches

Researchers in Israel have come up with a new type of brain-machine interface that enables a human operator to use brain activity to control nanometer-size robots inside a cockroach. They use "DNA origami robots" tethered to metal nanoparticles to trigger the delivery of a drug or "bioactive payload." They say their achievement has potential implications for the treatment of such difficult-to-treat disorders as schizophrenia, depression, and attention deficits.
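At heart this is a closed control loop: estimate a feature of the operator's brain activity, compare it against a calibrated threshold, and switch the nanobots' release trigger on when the threshold is crossed. The Python sketch below only illustrates that loop; the feature choice and threshold are invented for the example, the release hardware is abstracted to a single boolean, and none of this is the Israeli team's code.

```python
import numpy as np

def band_power(eeg_window, fs=250.0, band=(14.0, 30.0)):
    """Crude band-power estimate (here the beta band) from one window of EEG samples."""
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(eeg_window)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].mean()

def control_step(eeg_window, threshold):
    """Return True when mental effort (proxied by band power) should switch
    the release trigger on."""
    return band_power(eeg_window) > threshold

rng = np.random.default_rng(0)
resting = rng.normal(size=250)                                           # one second of "resting" EEG
effort = resting + 0.8 * np.sin(2 * np.pi * 20 * np.arange(250) / 250)   # added 20 Hz activity
threshold = 2.0 * band_power(resting)                                    # calibrated on the resting state
print(control_step(resting, threshold), control_step(effort, threshold))
```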

Global Warming's Impact on Thinking

Heat takes a measurable toll on work: even in auto factories, most of them presumably air-conditioned, a week with six days above 90 degrees reduces production by 8 percent. Perhaps more startling, rising temperatures seem to cause more violence.

“The relationship is really clear,” said Edward Miguel, an economist at the University of California, Berkeley, who has studied the issue. “Extremes in climate lead to more violence, more killing, more war, more land riots in Brazil, more sectarian violence in India. It’s pretty stunning how the relationship between climate and violence holds across the globe.”

The starting point is that heat makes people irritable. Researchers have found hot days linked to more angry honking in Arizona, and more road rage and car accidents in Spain. Scholars have done the math and found that on hot days a major-league baseball pitcher is more likely to retaliate for a perceived offense and deliberately hit a batter.

“High temperatures,” that study finds, are “lowering inhibitions against retaliation.”

On hot days, property crimes aren’t more common, but murders go up with the temperature. Likewise, researchers find that police officers are more likely to draw and fire their weapons during a training session conducted on a hot day. 

Robots that Kill on Their Own

Almost unnoticed outside defense circles, the Pentagon has put artificial intelligence at the center of its strategy to maintain the United States’ position as the world’s dominant military power. It is spending billions of dollars to develop what it calls autonomous and semiautonomous weapons and to build an arsenal stocked with the kind of weaponry that until now has existed only in Hollywood movies and science fiction, raising alarm among scientists and activists concerned by the implications of a robot arms race.

New Robots Can See into the Future

Researchers at UC Berkeley have equipped robots with what they call "visual foresight," a form of robotic imagination. The robots can look only a few seconds into the future for now, but that’s enough for them to figure out how to move objects around on a table without disturbing obstacles.

Crucially, the robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment or what the objects are. That’s because the visual imagination is learned entirely from scratch through unattended, unsupervised exploration in which the robot plays with objects on a table. After this play phase, the robot builds a predictive model of the world, and can use this model to manipulate new objects that it has not seen before.
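One way to picture how such a predictive model gets used is as a planning loop: sample candidate action sequences, "imagine" the outcome of each with the model, and execute the sequence whose imagined final frame looks most like the goal. The Python sketch below illustrates that loop with a hand-written stub standing in for the learned predictor and invented toy dynamics; it is not the Berkeley group's actual system.

```python
import numpy as np

def blob(center, size=16):
    """A small bright blob standing in for an object seen by the robot's camera."""
    yy, xx = np.mgrid[:size, :size]
    return np.exp(-((yy - center[0]) ** 2 + (xx - center[1]) ** 2) / 8.0)

def predict_frames(frame, actions):
    """Stub for the learned video-prediction model: each 2-D action simply
    shifts the whole image (wrapping at the edges). A real model would be a
    neural network trained on the robot's unsupervised play data."""
    for a in actions:
        frame = np.roll(frame, shift=(int(a[0]), int(a[1])), axis=(0, 1))
    return frame

def plan(current_frame, goal_frame, horizon=5, n_candidates=256, seed=0):
    """Keep the candidate action sequence whose predicted outcome is closest
    (in summed pixel difference) to the goal image."""
    rng = np.random.default_rng(seed)
    best_cost, best_actions = np.inf, None
    for _ in range(n_candidates):
        actions = rng.integers(-2, 3, size=(horizon, 2))   # small random pushes
        cost = np.abs(predict_frames(current_frame, actions) - goal_frame).sum()
        if cost < best_cost:
            best_cost, best_actions = cost, actions
    return best_actions, best_cost

scene, goal = blob((4, 4)), blob((11, 11))         # move the object across the table
actions, cost = plan(scene, goal)
print(round(float(cost), 2), actions.sum(axis=0))  # best cost and net planned push
```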

Addressing the AI 'Trust' Crisis

As AI is reported and represented more and more in the media, it could contribute to a deeply divided society, split between those who benefit from AI and those who reject it, surmises a researcher at Oxford University.

“Refusing to accept the advantages offered by AI could place a large group of people at a serious disadvantage,” writes Vyacheslav Polonski, warning of an impending “AI trust crisis.”

“We don’t need to understand the intricate inner workings of AI systems, but if people are given at least a bit of information about and control over how they are implemented, they will be more open to accepting AI into their lives,” he says.

Companies developing AI programs should be more transparent about how the “black box” of machine-learning algorithms works and should involve people more in the AI decision-making process, he says, referencing a study in which people who were given the freedom to slightly modify an algorithm tended to be more satisfied with its decisions, more likely to believe it was superior, and more likely to use it in the future.

Giving AI Programs the Benefit of Self-Doubt

AI programs can be made smarter by enabling them to know when they should doubt themselves. That’s the premise of modifications that researchers at Uber and Google are making to two popular deep-learning programs to handle probability.

The work reflects the realization that uncertainty is a key aspect of human reasoning and intelligence. Adding it to AI programs could make them smarter and less prone to blunders.

“It’s really important, for example, for a self-driving car to know its level of uncertainty. Otherwise it can easily get into a situation in which it makes a fatal error,” says Dustin Tran, a researcher who’s working on the problem at Google.

A deep-learning program that’s able to handle probability would be able to recognize objects with a reasonable degree of certainty from just a few examples rather than many thousands. Offering a measure of certainty rather than a yes-or-no answer should also help with engineering complex systems.
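One widely used trick for getting such an uncertainty estimate out of an ordinary deep network is Monte Carlo dropout: leave dropout switched on at prediction time and treat the spread across repeated forward passes as a confidence signal. The PyTorch sketch below is a generic illustration of that trick, not the specific probabilistic tools the Uber and Google researchers are building.

```python
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    """A tiny classifier with a dropout layer we can keep active at test time."""
    def __init__(self, n_inputs=32, n_classes=10, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model, x, n_samples=50):
    """Run several stochastic forward passes; return the mean class
    probabilities and their standard deviation as a rough uncertainty."""
    model.train()  # keep dropout switched on during inference
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

model = SmallClassifier()
x = torch.randn(1, 32)
mean_probs, std_probs = predict_with_uncertainty(model, x)
print(mean_probs.argmax().item(), std_probs.max().item())
```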

Mimicking the Brain with Neuromorphic Chips

This week at the Consumer Electronics Show in Las Vegas, Intel is demonstrating a new device called a “neuromorphic chip,” which mimics in a simplified way the functioning of neurons and synapses in the brain.

The best AI algorithms already use brain-like programs called simulated neural networks, which rely on parallel processing to recognize patterns in data—including objects in images and words in speech.

Neuromorphic chips take this idea further by etching the workings of neural networks into silicon. They are less flexible and powerful than the best general-purpose chips, but being specialized to their task makes them very energy efficient, and thus ideal for mobile devices, vehicles, and industrial equipment.

Codenamed Loihi, Intel’s new chip mimics how the brain functions by learning to operate based on various modes of feedback from the environment. Using the data to learn and make inferences, Loihi gets smarter over time and does not need to be trained in the traditional way.

“It takes a novel approach to computing via asynchronous spiking,” writes Michael Mayberry, corporate vice president and managing director of Intel Labs at Intel Corporation. “The brain’s neural networks relay information with pulses or spikes, modulate the synaptic strengths or weight of the interconnections based on timing of these spikes, and store these changes locally at the interconnections. Intelligent behaviors emerge from the cooperative and competitive interactions between multiple regions within the brain’s neural networks and its environment.”
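A toy software version of those ideas, a leaky integrate-and-fire neuron whose input weights are nudged up or down by a simple spike-timing rule, looks roughly like the sketch below. Every constant and update rule here is an illustrative assumption; this is not how the Loihi silicon is actually implemented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs, steps, dt = 20, 500, 1.0            # 500 one-millisecond steps
w = rng.uniform(0.1, 0.5, n_inputs)           # synaptic weights
v, v_thresh, tau_m = 0.0, 1.0, 20.0           # membrane potential, threshold, time constant
last_pre = np.full(n_inputs, -np.inf)         # last spike time of each input
last_post, a_plus, a_minus, tau_stdp = -np.inf, 0.01, 0.012, 20.0

for t in range(steps):
    pre_spikes = rng.random(n_inputs) < 0.02                  # Poisson-like input spikes
    last_pre[pre_spikes] = t
    v += dt * (-v / tau_m) + w @ pre_spikes                   # leaky integration of inputs
    if v >= v_thresh:                                         # the neuron fires a spike
        v, last_post = 0.0, t
        # Inputs that fired just before this output spike are strengthened.
        w += a_plus * np.exp(-(t - last_pre) / tau_stdp)
    elif pre_spikes.any():
        # Inputs firing after the last output spike are weakened.
        w[pre_spikes] -= a_minus * np.exp(-(t - last_post) / tau_stdp)
    w = np.clip(w, 0.0, 1.0)

print(w.round(3))                                             # weights shaped by spike timing
```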

A Technique for Reading Minds

A team from Kyoto University is using a deep neural network to read and interpret people’s thoughts. The new technique, dubbed “deep image reconstruction,” gives researchers the ability to decode images that have multiple layers of color and structure.
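At a high level, one way such a reconstruction can work is by feature matching: given target features (in the real study, decoded from fMRI recordings), adjust the pixels of a candidate image until a pretrained network produces matching features. The PyTorch sketch below fakes the "decoded" features from a reference image, so it illustrates only the optimization loop, not the Kyoto team's actual pipeline.

```python
import torch
import torchvision.models as models

# Mid-level convolutional features from an (untrained, for simplicity) VGG-16.
cnn = models.vgg16().features[:16].eval()
for p in cnn.parameters():
    p.requires_grad_(False)

reference = torch.rand(1, 3, 64, 64)       # stand-in for the image a subject viewed
target_features = cnn(reference)           # stand-in for features decoded from fMRI

image = torch.rand(1, 3, 64, 64, requires_grad=True)   # start reconstruction from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(cnn(image), target_features)
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0.0, 1.0)             # keep pixel values in a valid range

print(float(loss))                          # feature mismatch after optimization
```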

The reconstructed images from the study retain some resemblance to the original images viewed by participants, but mostly, they look like minimally-detailed blobs. However, the technology’s accuracy is only going to improve, and its applications will increase accordingly.

Imagine “instant art,” where you could produce art just by picturing it in your head. Or what if an AI could record your brain activity as you’re asleep and dreaming, then re-create your dreams in order to analyze them? 

Neurotechnology Patent Applications Are Soaring

Microsoft, IBM, and other IT titans are patenting brainwave-reading technologies they aim to employ in creating a technological ecosystem that works with our brains rather than against them, reports The Huffington Post. Their aim is to improve productivity. How great it would be, for example, to be able to block distracting notifications when we really need to focus.

Our consumption behaviors are also being affected by neurotechnology. Consumer-research behemoth Nielsen has an entire neuroscience division dedicated to measuring consumers at their most fundamental level — their brain activity. Nielsen patents show neurotechnology applications incorporating multidisciplinary approaches with other cutting-edge technologies such as virtual reality systems, and even how these same research approaches could be used to improve home-based healthcare.

But neurotechnology isn’t just geared towards monitoring brain activity. Significant venture funding and patent activity is being directed at products that can enhance brain functioning through neurocognitive training and through cutting-edge (and not fully tested) magnetic and electrical stimulation devices. While ten of the twenty top neurotech IP holders are publicly traded companies, start-up Thync raised $13 million from Khosla Ventures and other investors in 2014 to commercialize transcranial stimulation as the “new coffee” for a mainstream audience.

Here's what's coming, says Zach Lynch, executive director of the Neurotechnology Industry Organization:

[Neurotechnology] is a disruptive force that will impact major parts of every industry, creating new opportunities for entrepreneurs, companies, and investors on the cutting edge to take leadership roles in upgrading entertainment, health, education, wellness and more.

China Could Achieve AI Supremacy in Five Years

Artificial intelligence (AI) is fast heating up as a key area of strategic competition, reports the Center for a New American Security, a think tank focused on security and defense. China is no longer a distant second in the AI arms race but is quickly catching up. According to the analysis, the US can expect its current lead to continue for another five years before vanishing.

U.S. leaders have signaled that AI is a major component of the Defense Department’s strategy to reinvigorate American military technological dominance. In October 2016, the U.S. government released a “National Artificial Intelligence Research and Development Strategic Plan,” one of three reports on AI issued by the Obama administration. Other nations have similarly taken note of the transformative potential of AI. In July 2017, China released its own national-level AI development plan. In September, Russian President Vladimir Putin observed, “whoever becomes the leader in this sphere [artificial intelligence] will become the ruler of the world.”

Home to many of the world’s top AI companies, China is poised to be a major player in this unfolding competition. In this in-depth analytic report, CNAS adjunct fellow Elsa Kania explores China’s strategy for developing and implementing AI technology for military applications. Drawing on open-source Chinese-language documents, Ms. Kania explains Chinese strategic thinking on AI and specific military applications that Chinese leaders envision. Her report is a must-read for national security professionals concerned about maintaining U.S. strategic advantage in an era of rapid technological change.

AI Development in China is Taking an Insidious Turn

Facial recognition is now used routinely in China for shopping and to access some public services, Science magazine reports.  

For example, at a growing number of Kentucky Fried Chicken restaurants in China, customers can authorize digital payment by facial scan. Baidu's facial recognition systems confirm passenger identity at certain airport security gates. Recent AI advances have made it possible to identify individuals not only in up-close still photos, but also in video—a far more complex scientific task.

China's attitude toward such advances contrasts with the U.S. response. When the U.S. Customs and Border Protection last May revealed plans to use facial matching to verify the identities of travelers on select flights leaving the United States, a public debate erupted. In an analysis, Jay Stanley of the American Civil Liberties Union in Washington, D.C., warned of the potential for "mission creep": With new AI technologies, "you can subject thousands of people an hour to face recognition when they're walking down the sidewalk without their knowledge, let alone permission or participation."

In China the government is already deploying facial recognition technology in Xinjiang, a Muslim-majority region in western China where tensions between ethnic groups erupted in deadly riots in 2009. Reporters from The Wall Street Journal who visited the region late last year found surveillance cameras installed every hundred meters or so in several cities, and they noted facial recognition checkpoints at gas stations, shopping centers, mosque entrances, and elsewhere.

"This is the kind of thing that makes people in the West have nightmares about AI and society," says Subbarao Kambhampati, president of the Association for the Advancement of Artificial Intelligence (AAAI) in Palo Alto and a computer scientist at Arizona State University in Tempe. In China, he says, "people are either not worried, or not able to have those kinds of conversations."