
What’s so exciting about AI? Conversations at the Nobel Week Dialogue

This article can also be found on the Huffington Post.

Each year, the Nobel Prize brings together some of the brightest minds to celebrate accomplishments and people who have changed life on Earth for the better. Through a suite of events ranging from lectures and panel discussions to art exhibits, concerts and glamorous banquets, the Nobel Prize doesn’t just celebrate past accomplishments; it also celebrates work in progress: open questions and new, tantalizing opportunities for research and innovation.

This year, the topic of the Nobel Week Dialogue was “The Future of Intelligence”. The conference gathered some of the leading researchers and innovators in Artificial Intelligence and generated discussions on topics such as these: What is intelligence? Is the digital age changing us? Should we fear or welcome the Singularity? How will AI change the world?

Although both challenges in developing AI and concerns about human-computer interaction were expressed, let’s, in the celebratory spirit of the Nobel Prize, focus on the future possibilities of AI that were deemed most exciting and worth celebrating by some of the leaders in the field. Michael Levitt, the 2013 Nobel Laureate in Chemistry, expressed excitement regarding the potential of AI to simulate and model very complex phenomena. His work on developing multiscale models for complex chemical systems, for which he received the Nobel Prize, stands as testimony to the great power of modeling, more of which could be unleashed through further development of AI.

Harry Shum, Executive Vice President of Microsoft’s Technology and Research group, was excited about the creation of a machine alter ego with which humans could comfortably share data and preferences, and which would intelligently use them to help us accomplish our goals and improve our lives. His vision was that of a symbiotic relationship between human and artificial intelligence, where information could be fully, fluidly and seamlessly shared between the “natural” ego and the “artificial” alter ego, resulting in intelligence enhancement.

Barbara Grosz, professor at Harvard University, felt that a very exciting role for AI would be that of providing advice and expertise and, through it, enhancing people’s abilities in governance. Also, by applying artificial intelligence to information sharing, team-making could be perfected and enhanced. A concrete example would be that of creating efficient medical teams by moving past a fragmented healthcare system in which experts often do not receive relevant information from each other. AI could instead serve as a bridge, as a curator of information, by (intelligently) deciding what information to share with whom, and when, for the maximum benefit of the patient.

Stuart Russell, professor at UC Berkeley, highlighted AI’s potential to collect and synthesize information and expressed excitement about ingenious applications of this potential. His vision was that of building “consensus systems” – systems that would collect and synthesize information and create consensus on what is known, unknown, certain and uncertain in a specific field. This, he thought, could be applied not just to the functioning of nature and life in the biological sense, but also to history. Having a “consensus history”, a history that we would all agree upon, could help humanity learn from its mistakes and maybe even build a more goal-directed view of our future.

As regards the future of AI, there is much to celebrate and be excited about, but much work remains to be done to ensure that, in the words of Alfred Nobel, AI will “confer the greatest benefit to mankind”.

Future of AI at SciFoo 2015

This article can also be found on the Huffington Post.

Every year, approximately 200 people meet at Google in Mountain View, California, for an event called SciFoo, probably one of the most famous unconferences. Innovators from various disciplines are given access to Google’s cafeterias and to rooms with funky names such as neuralyzer, flux and capacitor, and are left to organize sessions where they discuss freely, present bold ideas, give demos of gadgets, and so on. No topic is considered too crazy or taboo, and half-baked thoughts and ideas are encouraged rather than rebuked. The outcome is a glorious mess of ideas and inspiration that one needs weeks to digest afterwards.

One of the sessions at SciFoo this year, organized by Nick Bostrom, Gary Marcus, Jaan Tallinn, Max Tegmark, and Murray Shanahan, discussed the future of artificial intelligence. Each of the organizers presented a five-minute thought piece, after which the floor was open for discussion. SciFoo operates under a “frieNDA” policy where people’s comments can only be reported with their permission – I’m grateful to the five speakers for consenting.

Murray Shanahan began by delineating the distinction between, on the one hand, specialist AI (being developed with certainty in the short term, on a time frame of 5-10 years) and, on the other, general AI (with a long time horizon, the full development of which for now pertains to the domain of science fiction visions). Shanahan then raised three question-ideas:

  1. Do we want to build properly autonomous machines or do we want to ensure that they are just tools?
  2. If we could create a powerful AI that could give us anything we wanted, what would we get it to do?
  3. Should we create our own evolutionary successors?

While Murray Shanahan opened with philosophical idea-questions, taking as a given the development of general, strong AI, Gary Marcus adopted the position of the skeptic and focused on the question of how imminent strong AI really is. To the question of how soon strong AI will come, he expressed the opinion that very little progress has been made on strong AI and that the focus is almost entirely concentrated on narrow AI.

Deep learning, the most promising avenue towards strong AI, is easily fooled, he felt, and doesn’t conceive of the world as we do. As an example, he referred to the T-shirt he had worn the previous day, imprinted with a wavy pattern and bearing the inscription “Don’t worry killer robot, I am a starfish” – a mocking allusion to the fact that image recognition algorithms are still plagued by very basic mistakes, such as confusing wavy patterns with starfish. Marcus therefore concluded that strong, general AI is at least 20 to 40 years away. Even though concerned about strong AI, he didn’t think it would come soon, mainly because we are still missing a solution to a crucial problem: how to instantiate common sense in a machine.
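
To make the fragility Marcus points to concrete, here is a small sketch of my own – a toy linear classifier with made-up names and numbers, not the deep-network “starfish” experiments he alludes to: a targeted nudge to the input, small compared to the pixel values themselves, is enough to flip the model’s prediction.

```python
# A toy of my own (not the "starfish" experiment itself), illustrating the
# underlying fragility: a targeted nudge to the input, small compared to the
# pixel values themselves, flips a classifier's answer.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_classes = 1024, 3                  # hypothetical tiny image setup
W = rng.normal(size=(n_classes, n_pixels))     # stand-in for trained weights

def predict(x):
    """Return the class with the highest score for input image x."""
    return int(np.argmax(W @ x))

x = rng.normal(size=n_pixels)                  # a "clean" image
scores = W @ x
original = int(np.argmax(scores))
target = int(np.argsort(scores)[-2])           # second most likely class

# Nudge each pixel in whichever direction raises the target score relative
# to the original score; the budget is sized just large enough to flip it.
d = W[target] - W[original]
epsilon = 1.5 * (scores[original] - scores[target]) / np.abs(d).sum()
x_adv = x + epsilon * np.sign(d)

print("per-pixel nudge:", round(float(epsilon), 4))    # small vs. pixel scale ~1
print("before:", original, " after:", predict(x_adv))  # the prediction flips
```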

Nick Bostrom opened his remarks by stating that it is hard to tell how far we are from human level AI. However, an interesting question according to him was: what happens next? Very likely we will get an intelligence explosion. This means that things that are compatible with the laws of physics but are currently part of science fiction could happen. So what can we do to increase the chance of beneficial outcomes? Bostrom felt that responders to this question usually belong to two camps: those who believe that this is not a special problem, therefore no special effort is needed and we will just solve this as we go along, and those who believe there is no point in trying because we cannot control it anyway. Bostrom, however, wanted to point out that there could instead be a third way of thinking about this: what if this is a difficult but solvable problem? he asked.

Jaan Tallinn talked about his personal history of increasing concern regarding the development of AI, from his first encounter with the writings of Eliezer Yudkowsky to his involvement with and support of organizations that attempt to steer the development of AI towards beneficial outcomes.

Max Tegmark introduced one of these organizations supported by Tallinn, the Future of Life Institute, which steered the effort behind an open letter signed by more than 6,000 people, among them top AI researchers and developers, underlining the importance of building AI that is robust and beneficial to humanity. The letter and accompanying research priorities document received financial support from Elon Musk, which enabled a grant program for AI safety research.

The presentations were followed by a lively general discussion. Below are some of the questions from the public and the remarks of the panel.

Do you think we can achieve AI without it having a physical body and emotions?

The panel remarked that intelligence is a multifaceted thing and that artificial intelligence is already ahead of us in some ways. A better way of thinking about intelligence is that it simply means that you are really good at accomplishing your goals.

Since cognition is embodied – for example, opportunities for acquiring and using language depend on motor control, and calculations depend on our hands – is it possible to separate software from hardware in terms of cognition?

Robots have bodies and sensors, so to the extent that this matters, it is not an obstacle, merely a challenge. Embodiment is not a necessary condition for cognition. The fact that machines don’t have bodies won’t save us.

What do we do with strong AI? Why is its fate ours to choose?

At the end of the day you have to be a consequentialist and ask yourself: why are you involved in a project that randomizes the world? What is the range of futures ahead of you? Also, this question has different answers depending on what kind of AI you imagine building: one that is dead inside but can do amazing things for us, or something that is conscious and able to suffer.

Isn’t AI inevitable if we want to colonize the Universe?

Indeed, when contemplating the kind of AI we want to develop, we have to think beyond the near future and the limits of our planet; we should also think about humanity’s cosmic endowment.

In order to design a system that is more moral, how do you decide what is moral?

We should not underestimate the whole ecosystem of values that might be vastly different from any human’s. We should also think not just about the initial set of moral values but also about what we want to allow in terms of moral development.

We are already creating corporations that we feel have intentions and an independent existence. In fact, we create many entities, social or technological, that demonstrate volition, hostility, morality. So are we in a sense simply the microbiome of future AI (echoing another session at SciFoo that tackled the controversial question of whether we indeed have free will or are in large part controlled by our microbiome, our gut bacteria)?

The panel responded that one of the issues concerning us, the potential “microbiome” of future entities, is whether we are going to get a unipolar or a multipolar outcome (a single AI entity or a diverse ecosystem of AIs). The idea of the intelligence explosion coming out of a system that is able to improve itself seems to point towards a unipolar outcome. In the end it very much depends on the rate at which the AI will improve itself.
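
One way to make the rate dependence concrete is a cartoon model of my own (not something presented at the session, and with all numbers made up): two systems start almost level, and the gap between them looks very different depending on whether progress adds a steady increment per step or an increment that grows with current capability.

```python
# A cartoon, not a forecast: why the self-improvement rate matters for
# unipolar vs. multipolar outcomes. Two systems start 5% apart; "steady"
# progress adds a fixed increment per step, while "recursive" progress
# adds an increment that grows with current capability.
def simulate(c0, steps, recursive, r=0.2):
    c = c0
    for _ in range(steps):
        c += r * c * c if recursive else r
    return c

a0, b0, steps = 1.00, 1.05, 12    # hypothetical starting capabilities

for recursive in (False, True):
    a = simulate(a0, steps, recursive)
    b = simulate(b0, steps, recursive)
    mode = "recursive" if recursive else "steady"
    print(f"{mode:>9}: A={a:.3g}  B={b:.3g}  B minus A={b - a:.3g}")

# Steady progress leaves the initial 0.05 gap unchanged (a multipolar-looking
# world); strongly self-reinforcing progress lets the slightly-ahead system
# run away from the other within a dozen steps (a unipolar-looking one).
```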

Another issue is building machines that do not only what we literally ask them to do but what we really want – the key to remaining a thriving microbiome. Some panelists felt this was a big challenge: could we really create AI that is not self-reflective? A lot seems to hinge on which aspects of the world the AI can represent. Once an oracle machine (generally considered safe because it only answers questions, like an oracle, and does not act upon the world) starts modeling the people who ask the questions, its responses could start to include manipulative answers as well. Indeed, in some sense our DNA has invented our brains to help reproduce itself better, but we found ways to circumvent that, through birth control for example (similarly, we have found ways to hack our gut bacteria). So would our “microbiome goals” be retained by entities smarter than us?

Finally, another related question is what the machines would be able to learn. What kind of values and action schemas would be “innate” (pre-programmed), and what would the AI learn?

The session ended in a true SciFoo spirit with an honest recognition of our limited knowledge but also with a bold thought about the limitless possibilities for discovery and creativity:

Even in psychology we don’t know what general intelligence really means, so in modeling cognitive processes, in a sense, we can’t even claim that we are either near or far from general AI.

To this thought from the public, the panel remarked that even though the threshold of general or super intelligence might be deceiving in a sense, being fluid and ill-defined, there is no issue in principle with creating general intelligence – after all, our own brains are an existence proof that it is possible.

Terminator robots and AI risk

This article can also be found on the Huffington Post.

Concerns about risk coming from the development of AI have recently been expressed by many computer science researchers, entrepreneurs and scientists, making us wonder: what are we fearing? What does this worrisome thing look like? An overwhelming number of attempts to explain the risk have appeared in the media accompanied by pictures of terminator robots. But while the prevalent visual representation of AI risk has become the terminator robot, this is in fact very far from the most likely scenarios in which AI will manifest itself in the world. So, as we begin to face our fear, the face of what we’re told we should fear is utterly misleading. My fear is instead that, as with any representation that reveals some things and hides others, what the terminator robot reveals is simply something about our mind and its biases, while hiding the real dangers that we are facing.

The terminator robot becoming such a “catchy” representation is due, I believe, to the fact that our minds and the fears they dream up are embodied. We have evolved to fear moving atoms: tigers that could attack us, tornados that could ruin our shelters, waves that could drown us, human opponents that could harm us. Killer robots from the future are just a spinoff that cultural evolution has grafted onto these deeply rooted, evolved fears.

There is much research showing that the way we conceive of the world and the way we act or react in it are based on embodied representations. In their book “Metaphors We Live By”, Lakoff and Johnson describe how we represent even very abstract concepts in relation to our own physical bodies. For example, we think of happiness as being up and sadness as being down when we talk about events that “lift us up” and days when we feel “down”. These metaphors we use for representing abstractions in embodied ways are so deeply ingrained in our language that we don’t even think of them as figures of speech anymore. Equally, our reactions are highly influenced by embodied representations. Several studies have found that when people are looking at drawings of eyes, they cheat less and behave more pro-socially than when they are not. Finally, the way we act in the world and the way we perceive our actions to be ethical or not depend on embodiment. Variations of the famous trolley problem (in which a person is asked whether it is morally right to sacrifice the life of one person in order to save the lives of five by using that person as a trolley-stopper) have shown that people are more willing to say that it is ethical to do so when one needs to pull a lever that will cause the person to fall in front of the trolley than when one needs to push the person oneself.

All of this suggests that the reason why killer robots “sell” is because we are wired to fear moving atoms, not moving bits of information. It’s almost like we need to give our fears an embodied anchor or it’s not scary anymore. But what is the price we pay for the sensation of fear that we need to nurture through embodied representations? I believe the price is blindness to the real danger.

The risk of AI is very likely not going to play out as armies of robots taking over the world, but in more subtle ways: AI taking our jobs, controlling our financial markets, our power plants, our weaponized drones, our media… Evolution has not equipped us to deal with such ghostly entities that don’t come in the form of steel skeletons with red shiny eyes but in the form of menacing arrangements of zeros and ones.

In spite of our lack of biological readiness to react to such threats, we have created societies that are more and more dependent on these elusive bits of information. We no longer live solely in a world of moving atoms; we also live in a world of moving bits. We’ve become not just our bodies but also our Facebook page, our Twitter account, our internet searches, our emails. We no longer own just gold coins or dollar bills; we own numbers: credit card numbers, passport numbers, phone numbers. Our new digital world is quite different from the one that hosts our bodies, and it is silly to think that what is worthy of fear in it will have the same characteristics as what’s worthy of fear in this one. And just because having our emails and internet searches stored and read by others does not feel as creepy as a pair of eyes always peering over our shoulder, that doesn’t mean it really isn’t. Just because a silent and stealthy takeover by AI does not give us the heebie-jeebies quite as much as roaring armies of terminators do, that doesn’t mean it is not equally dangerous, or even more so.

So, even if we do not feel the fear, we need to understand it. We need to be fearfully mindful not of the terminator robots themselves, but of what they hide and misrepresent.

Why AI?

This article can also be found on the Huffington Post.

I have been perplexed lately by the media frenzy on the topic of artificial intelligence (AI) and all the inflammatory statements put forth about “deadly machines” and “robot uprisings.” Of course, this can partly be explained by the public’s general taste for frivolous alarmism and the media’s attempt to satisfy it. However, I feel that besides the question of “why this general reaction?”, there is another important question worth asking: “Why this particular topic? Why AI?”

Why is AI capturing so much of our attention and imagination? Why is it so hard to have a levelheaded discussion about it? Why is the middle ground so infertile for this topic?

I have come to believe that the reason is that AI engages some of our deepest existential hopes and fears and forces us to look at ourselves in novel, unsettling ways. Even though the ways in which we are forced to face our humanity are new, the issues and questions are old. We can trace them back to stories and myths that we’ve told for ages, to philosophical questions we’ve posed in various forms throughout the centuries, or to deeply rooted psychological mechanisms that we’ve slowly discovered. Here are four of the deeper existential questions that AI forces us to ask:

What if we get what we ask for but not what we really want?

Or, in the words of Coldplay’s “Fix You,” “when you get what you want but not what you need,” what happens then? The ancients were no strangers to this question. Legend has it that King Midas asked the gods to make everything he touched turn to gold. So the king became rich, but he also died of starvation, because the food he touched turned to gold as well. AI, more specifically human or super-human AI, is that tantalizing golden touch. Any programmer has at some point experienced an inkling of it, the great power of a program that computes what it would take you several lifetimes to do — but it’s the wrong computation! Yet it’s the right one, because it’s exactly what you asked for, but not what you really wanted. Welcome to the birth of a computer bug!

Superhuman AI could of course magnify this experience and turn itself into our own buggy god that would give us tons of gold and no food. Why would it do that? AI researcher Stuart Russell likes to illustrate this through a simple example: imagine that you ask your artificially intelligent self-driving car to get you to the airport as fast as possible. In order to do so, the car will drive at maximum speed, accelerate and brake abruptly… and the consequences could be lethal to you. In trying to optimize for time, the car will set all other parameters, like speed and acceleration, to extreme values and possibly endanger your life. Now take that scenario and extend it to wishes like “make me rich”, “make me happy”, or “help me find love”…
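
Here is a minimal sketch of the failure Russell describes – my own toy, with made-up numbers and helper names, not his example code: the planner is handed only the stated objective, travel time, so it drives every unstated variable to its physical limit.

```python
# A toy of my own illustrating objective misspecification (made-up numbers,
# not Russell's example code). The planner sees only the stated objective,
# travel time, so it pushes speed and braking to their physical limits;
# the passenger's unstated preferences never enter the optimization.
DISTANCE_KM = 30.0                      # hypothetical trip to the airport

def travel_time_h(speed_kmh, brake_g):
    """The stated objective: hours to the airport. Harder braking lets the
    car hold its cruise speed a little longer, so it always 'helps' here."""
    return DISTANCE_KM / speed_kmh + 0.02 / brake_g

def unstated_discomfort(speed_kmh, brake_g):
    """What the passenger actually cares about, but never told the car."""
    return (speed_kmh / 120.0) ** 2 + (brake_g / 0.3) ** 2

# Candidate driving plans: cruise speed (km/h) and braking intensity (g).
plans = [(s, b) for s in range(60, 221, 20) for b in (0.2, 0.5, 1.0)]

best = min(plans, key=lambda p: travel_time_h(*p))    # optimize stated goal only
print("chosen plan (speed, braking):", best)          # ends up at (220, 1.0)
print("stated objective, hours:", round(travel_time_h(*best), 3))
print("unstated discomfort:", round(unstated_discomfort(*best), 1))
```

Nothing in the stated objective penalizes the extreme plan, so the optimizer picks it; adding the unstated preferences to the objective is exactly the part we tend to leave unspecified.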

What this thought experiment should make us realize is that we blissfully live in the unspecified. Our wishes, our hopes, and our values are barely small nodes of insight in the very complicated tapestry of reality. Our consciousness is rarely bothered with the myriad fine-tuned parameters that make our human experiences possible and desirable. But what happens when another actor like AI enters the stage, one that has the power to weave new destinies for us? How will we be able to ask for the right thing? How will we be able to specify it correctly? How will we know what we want, what we really want?

What if we encounter otherness?

The issue of not being able to specify what we want thoroughly enough is in part due to our limited mental resources and our inability to make predictions in environments above a certain level of complexity. But why wouldn’t our super-human machines be able to do that for us? After all, they will surpass our limitations and inabilities, no? They should figure out what we really want.

Maybe… but likely not. Super-human AI will likely be extremely different from us. It could in fact be our absolute otherness, an “other” so different from everything we know and understand that we’d find it monstrous. Zarathustra tells his disciples to embrace not the neighbor but the “farthest.” However, AI might be so much our “farthest” that it would be impossible to reach, or to touch, or to grasp. As psychologist and philosopher Joshua Greene points out, us humans, we have a common currency: our human experiences. We understand when someone says: “I’m happy” because we share a common evolutionary past with them, a similar body and neural architecture and more or less similar environments. But will we have any common currency with AI? I like it when Samantha explains to Theodore in the movie Her that interacting with him is like reading a book with spaces between words that are almost infinite, and it is in these spaces that she finds herself, not in the words. Of course, the real-world AI would evolve so fast that the space between it and humans would leave no room for a love story to ever be told.

What if we transcend and become immortal but transcendence is bleak and immortality dreary?

But what if, instead of being left behind, we merge with the machines, transcend and become immortal, just as AI advocate Ray Kurzweil optimistically envisions? Spending time with people who are working on creating or improving AI, I’ve realized that beyond the immediate short-term incentives to build better voice recognition or better high-speed trading algorithms and the like, many of these people hope to ultimately create something that will help them overcome death and biological limitations — they hope to eventually upload themselves in one form or another.

Transcendence and immortality have been the promise of all religions for ages. Through AI we now have the promise of a kind of transcendence and immortality that does not depend on a deity, but only on the power of our human minds to transfer our subjective experiences into silicon. But as long as hopes of transcendence and immortality have existed, tales of caution have also been told. I am particularly fond of one tale explored in the movie The Fountain. When the injured, dying knight has finally reached the Tree of Life, he ecstatically stabs its trunk and drinks from it, and happily sees his wounds heal. But soon the healed wounds explode in bouquets of flowers and he himself turns into a flower bush that will live forever through the cycle of life and regeneration. But that is of course not what the knight had hoped for… It’s interesting that the final scene of the movie Transcendence also ends with a close-up of a flower, reminiscent of Tristan and Isolde and their tragic transcendence through a rose that grows out of their tombs. Of course, there are less mythical ways in which transcendence and immortality through AI could go wrong. For example, neuroscientist Giulio Tononi warns that even though we might build simulations that act like us and think like us they will likely not be conscious — it wouldn’t feel like anything to be them. Heidegger saw in death a way to authenticity, so before we transcend it and become immortal, we might first want to figure out what is authentically us.

What if we finally fully know ourselves… and make ourselves obsolete?

Another promise of AI is exactly that: authentic knowledge about what we are. AI extends the promise that we could finally know ourselves thoroughly. A great part of AI research is based on brain simulation, so if we keep forging on we might actually figure out what every single neuron and every single synapse does; and then we will have the keys to our own consciousness, our own human experiences. We will finally be able to say a resounding “Yes!” to the imperative inscribed at the temple of Apollo at Delphi: “know thyself.” The catch is that, as my husband, physicist Max Tegmark, likes to point out, every time we’ve discovered something about ourselves we’ve also managed to replace it. When we figured out things about strength and muscle power, we replaced them with engines, and when we discovered more about computation, we invented computers and delegated that chore to them. When we discover the code to our human intelligence, our consciousness and every human experience imaginable, will we replace that too? Is our human destiny to make ourselves obsolete once we’ve figured ourselves out? Creating AI is in some sense looking at our own reflection in a pond — just like Narcissus — without realizing that the pond is looking into us as well. And as we fall in love with what we see, might we also be about to drown?

Will we figure out who we are, what we want, how to relate to what we are not, and how to transcend properly? These are big questions that have been with us for ages, and now we are challenged like never before to answer them. Humanity is heading fast toward a point where leisurely pondering these questions will no longer be an option. Before we proceed on our journey to change our destiny forever, we should stop and think about where we are going and what choices we are making. We should stop and think: why AI?