
Becoming a mother in the age of neurotoxins: Reflections from the International Meeting for Autism Research 2016

This article can also be found on the Huffington Post.

Becoming a parent, especially a mother, is one of the biggest responsibilities that one can assume in one's lifetime. Creating and nurturing a new life is an immensely complex task, one that requires great dedication and love, as well as skill and knowledge. Being a mother is hard even when the child is healthy and growing harmoniously, but the challenges of parenthood are multiplied many times over when the child's development goes awry, for example when the child has autism or another neurodevelopmental disorder.

About 1 in 68 children will develop autism, and even though we don't yet fully understand what causes it, our knowledge about what puts someone at risk for autism is growing each day. For example, we now know that autism is a heritable disorder: a child who has a first-degree relative (a parent or sibling) with autism has a 1 in 5 chance of developing autism as well. These children are considered to be at high risk for autism. But the risk for autism is not purely genetic: it is also environmental. There is a growing body of research showing that autism risk is elevated by certain environmental factors such as toxins. In fact, one of the overarching themes at this year's International Meeting for Autism Research (IMFAR) was the issue of environmental causes of autism – what factors in our environment are responsible for elevated autism risk, and can they explain part of the recent rise in the number of children diagnosed with autism?

When thinking about "environment", we usually conjure up images of the air we breathe, the water we drink, the soil we step on, the buildings we live in, and so on. But much of the research on environmental risks for autism focuses instead on the pre-birth environment. In this case the environment is the mother: her body, the nutrients in her blood, the air in her lungs, the function of her various organs. And even though the mother's womb has traditionally been praised as one of the safest places on earth, recent research warns us of lurking dangers. For example, a mother's exposure to organophosphate pesticides at some point during pregnancy was associated with increased risk of autism for the child, with exposure during the third trimester of pregnancy doubling the risk. Air pollution has also been associated with increased risk for autism: mothers of children with autism were more likely to have lived in homes exposed to high traffic-related air pollution during their pregnancy. Also, mothers who had a metabolic condition such as diabetes, or who were obese during pregnancy, were more likely to have children with autism or developmental delay.

These findings are particularly worrisome given that sales of organic food (food free of pesticides) represent only 4% of total U.S. food sales, and organic food can be 30%-100% more expensive. More than 46.2 million people in the US live in an area with high levels of air pollution, more than a third of U.S. adults (78.6 million people) are obese, and 9.3% (29.1 million people) have diabetes.

This suggests that becoming a mother is an even harder endeavor than before: it requires more knowledge and awareness of risk factors, more money to be able to afford avoiding them, and more discipline and a more restrictive lifestyle. We have unfortunately created a world that is cruel to mothers-to-be, a world that places tremendous burdens on their shoulders for ensuring the health of their children. This is a world in which mothers have to worry about the most basic and automatic decisions humans make, such as eating, breathing or simply having a well-functioning body. By putting neurotoxins in the food we eat, polluting the air that we breathe and generating societal trends of food consumption that lead to obesity and diabetes, we have created a world that tremendously restricts the lives of women who choose to responsibly become mothers.

Reducing the environmental risk of autism is not just a health issue – it is also a gender equality issue about the basic rights and freedoms of women. There is an old saying: "It takes a village to raise a child." We women should demand that our extended village, our society and world, take better care of our children and of us by providing us with food, air and water that won't put our children at risk for autism or other neurodevelopmental disorders.

What’s so exciting about AI? Conversations at the Nobel Week Dialogue

This article can also be found on the Huffington Post.

Each year, the Nobel Prize brings together some of the brightest minds to celebrate accomplishments and people that have changed life on Earth for the better. Through a suite of events ranging from lectures and panel discussions to art exhibits, concerts and glamorous banquets, the Nobel Prize doesn’t just celebrate accomplishments, but also celebrates work in progress: open questions and new, tantalizing opportunities for research and innovation.

This year, the topic of the Nobel Week Dialogue was "The Future of Intelligence". The conference gathered some of the leading researchers and innovators in Artificial Intelligence and generated discussions around questions such as: What is intelligence? Is the digital age changing us? Should we fear or welcome the Singularity? How will AI change the world?

Although both challenges in developing AI and concerns about human-computer interaction were expressed, let us, in the celebratory spirit of the Nobel Prize, focus on the future possibilities of AI that were deemed most exciting and worth celebrating by some of the leaders in the field. Michael Levitt, the 2013 Nobel Laureate in Chemistry, expressed excitement about the potential of AI to simulate and model very complex phenomena. His work on developing multiscale models for complex chemical systems, for which he received the Nobel Prize, stands as testimony to the great power of modeling, more of which could be unleashed through further development of AI.

Harry Shum, Executive Vice President of Microsoft's Technology and Research group, was excited about the creation of a machine alter ego with which humans could comfortably share data and preferences, and which would intelligently use this information to help us accomplish our goals and improve our lives. His vision was that of a symbiotic relationship between human and artificial intelligence in which information could be fully, fluidly and seamlessly shared between the "natural" ego and the "artificial" alter ego, resulting in intelligence enhancement.

Barbara Grosz, professor at Harvard University, felt that a very exciting role for AI would be that of providing advice and expertise, thereby enhancing people's abilities in governance. Also, by applying artificial intelligence to information sharing, team-making could be perfected and enhanced. A concrete example would be that of creating efficient medical teams by moving past a fragmented healthcare system in which experts often do not receive relevant information from each other. AI could instead serve as a bridge, as a curator of information, (intelligently) deciding what information to share with whom and when, for the maximum benefit of the patient.

Stuart Russell, professor at UC Berkeley, highlighted AI's potential to collect and synthesize information and expressed excitement about ingenious applications of this potential. His vision was that of building "consensus systems" – systems that would collect and synthesize information and create consensus on what is known, unknown, certain and uncertain in a specific field. This, he thought, could be applied not just to the functioning of nature and life in the biological sense, but also to history. Having a "consensus history", a history that we would all agree upon, could help humanity learn from its mistakes and maybe even build a more goal-directed view of our future.

As regards the future of AI, there is much to celebrate and be excited about, but much work remains to be done to ensure that, in the words of Alfred Nobel, AI will “confer the greatest benefit to mankind”.

What you should really be scared of on Halloween: A horror story

This article can also be found on the Huffington Post.

It was four days before Halloween and the spirits were tense, both those above and those lurking in the waters below. There was agitation and busy preparation everywhere, and a sense of gloom and doom was weighing heavily on everyone’s minds. Deep in the waters the heat was rising, and the lost ones were finding no rest. Provoked by the world above, they were ready to unleash their curse. Had the time come for the world as they knew it to end?

It was indeed four days before Halloween: October 27, 1962. The spirits were tense, both those above, in the eleven US Navy destroyers and the aircraft carrier USS Randolph, and those lurking down in the waters below in the nuclear-armed Soviet submarine B-59. There was agitation and busy preparation everywhere due to the Cuban Missile Crisis, and a sense of gloom and doom was weighing heavily on everyone's minds. Deep in the waters the heat rose past 45°C (113°F) as the submarine's batteries were running out and the air-conditioning had stopped. On the verge of carbon dioxide poisoning, many crew members fainted. The crew was feeling lost and unsettled, as there had been no contact with Moscow for days and they didn't know whether World War III had already begun. Then the Americans started dropping small depth charges at them. "We thought – that's it – the end", crewmember V.P. Orlov recalled. "It felt like you were sitting in a metal barrel, which somebody is constantly blasting with a sledgehammer."

The world above was blissfully unaware that Captain Savitski had decided to launch the nuclear torpedo. Valentin Grigorievich, the torpedo officer, exclaimed: "We will die, but we will sink them all – we will not disgrace our Navy!" In those brief moments it looked like the time may have come for the world as it was known to end, creating more ghosts than Halloween had ever known.

Luckily for us, the decision to launch had to be authorized by three officers on board, and one of them, Vasili Arkhipov, said no. The chilling thought of how close we humans were to destroying everything we cherish makes this the scariest Halloween story. Like a really good Halloween story, this one has not a happy ending, but a suspenseful one in which we’ve only barely avoided the curse, and the danger remains with us. And like the very best Halloween stories, this one grew ever scarier over the years, as scientists came to realize that a dark smoky Halloween cloud might enshroud Earth for ten straight Halloweens, causing a decade-long nuclear winter producing not millions but billions of ghosts.

Right now, we humans have over 15,000 nuclear weapons, most of which are over a hundred times more powerful than those that obliterated Hiroshima and Nagasaki. Many of these weapons are kept on hair-trigger alert, ready to launch within minutes, increasing the risk of World War III starting by accident just as on that fateful Halloween 53 years ago. As more Halloweens pass, we accumulate more harrowing close calls, more near-encounters with the ghosts.

This Halloween, you might want to get spooked by watching an explosion and reading about the blood-curdling nuclear war close calls we've had in the past decades, and then, hopefully, do something to keep the curse away, in the hope that one Halloween we'll be able to say: nuclear war – nevermore.

The land of elves and midnight sun

After much deliberation, my sister and I arrived at a fair and just resolution to our problem: where should we meet? Who should visit whom? Who should do the traveling? The obvious compromise: meet where the tectonic plates of our respective continents meet, in Iceland. So in May 2014, we both traveled to the end of our tectonic plate, to the land of elves and midnight sun.

The journey was not without peril, though. Our innocent souls did not see the imminent danger when we decided to feed the enemy peanuts, but we managed to survive the attack of the ferocious Icelandic beast – the horse – and lived to see the rage of the underworld: Strokkur erupting.

At Gullfoss, over the rainbow, our dreams came true: to be together again. We took the untraveled path and in the midst of the Icelandic wilderness, on a steep cliff above the deceptively calm waters of Hvítá, we pulled out our iPhones and listened together to Anastasia songs again: "at the beginning with yooooooooouuuuu…".

At Þingvellir we tossed a coin in the crack between our two tectonic plates and wished to be reunited. Unlike Icelanders who know when to sleep and when to wake up (even though there is nothing obvious about when that should be), we got lured out of our beds by the midnight sun deep into the world of hot springs and fuming lands. At 2 am, hiking alone in the middle of nowhere, on a landscape that looked like a piece of another reality, we realized the meaning of true magic: having the most amazing sister in the Universe.

Future of AI at SciFoo 2015

This article can also be found on the Huffington Post.

Every year approximately 200 people meet at Google in Mountain View, California for an event called SciFoo, probably one of the most famous unconferences. Innovators from various disciplines are given access to Google's cafeterias and to rooms with funky names such as neuralyzer, flux and capacitor, and are left to organize sessions where they discuss freely, present bold ideas, give demos of gadgets, etc. No topic is considered too crazy or taboo, and half-baked thoughts and ideas are encouraged rather than rebuked. The outcome is a glorious mess of ideas and inspiration that one needs weeks to digest afterwards.

One of the sessions at SciFoo this year, organized by Nick Bostrom, Gary Marcus, Jaan Tallinn, Max Tegmark, and Murray Shanahan, discussed the future of artificial intelligence. Each of the organizers presented a 5-minute thought piece, after which the floor was open for discussion. SciFoo operates under a "frieNDA" policy where people's comments can only be reported with their permission – I'm grateful to the five speakers for consenting.

Murray Shanahan began by delineating the distinction between, on the one hand, specialist AI (being developed with certainty in the short term, on a time frame of 5-10 years), and, on the other, general AI (with a long time horizon, the full development of which for now pertains to the domain of science fiction visions). Shanahan then raised three question-ideas:

  1. Do we want to build properly autonomous machines or do we want to ensure that they are just tools?
  2. If we could create a powerful AI that could give us anything we wanted, what would we get it to do?
  3. Should we create our own evolutionary successors?

While Murray Shanahan opened with philosophical idea-questions, taking the development of general, strong AI as a given, Gary Marcus adopted the position of the skeptic and focused on the question of how imminent strong AI really is. He expressed the opinion that very little progress has been made on strong AI and that the focus is almost entirely concentrated on narrow AI.

Deep learning, the most promising avenue towards strong AI, is easily fooled, he felt, and doesn't conceive of the world as we do. As an example he referred to the T-shirt he had worn the previous day, imprinted with a wavy pattern and the inscription "Don't worry killer robot, I am a starfish" – a mocking allusion to the fact that image recognition algorithms are still plagued by very basic mistakes, such as confusing wavy patterns with starfish. Marcus therefore concluded that strong, general AI is at least 20 to 40 years away. Even though concerned about strong AI, he didn't think it would come soon, mainly because we are still missing a solution to a crucial problem: how to instantiate common sense in a machine.

Nick Bostrom opened his remarks by stating that it is hard to tell how far we are from human-level AI. However, an interesting question, according to him, is: what happens next? Very likely we will get an intelligence explosion. This means that things that are compatible with the laws of physics but are currently part of science fiction could happen. So what can we do to increase the chance of beneficial outcomes? Bostrom felt that responses to this question usually fall into two camps: those who believe this is not a special problem, so no special effort is needed and we will just solve it as we go along, and those who believe there is no point in trying because we cannot control it anyway. Bostrom, however, wanted to point out a third way of thinking about this: what if this is a difficult but solvable problem?

Jaan Tallinn talked about his personal history of increasing concern about the development of AI, from his first encounter with the writings of Eliezer Yudkowsky to his involvement with and support of organizations that attempt to steer the development of AI towards beneficial outcomes.

Max Tegmark introduced one of these organizations supported by Tallinn, the Future of Life Institute, which steered the effort behind an open letter signed by more than 6,000 people, among them top AI researchers and developers, underlining the importance of building AI that is robust and beneficial to humanity. The letter and the accompanying research priorities document received financial support from Elon Musk, which enabled a grant program for AI safety research.

The presentations were followed by a lively general discussion. Below are some of the questions from the public and the remarks of the panel.

Do you think we can achieve AI without it having a physical body and emotions?

The panel remarked that intelligence is a multifaceted thing and that artificial intelligence is already ahead of us in some ways. A better way of thinking about intelligence is that it simply means that you are really good at accomplishing your goals.

Since cognition is embodied (for example, opportunities for acquiring and using language depend on motor control, and calculations depend on hands), is it possible to separate software from hardware in terms of cognition?

Robots have bodies and sensors, so to the extent that that matters, it is not an obstacle but merely a challenge. Embodiment is not a necessary condition for cognition. The fact that machines don't have bodies won't save us.

What do we do with strong AI? Why is its fate ours to choose?

At the end of the day you have to be a consequentialist and ask yourself: why are you involved in a project that randomizes the world? What is the range of futures ahead of you? Also, this question has different answers depending on what kind of AI you imagine building: one that is dead inside but can do amazing things for us, or something that is conscious and able to suffer.

Isn’t AI inevitable if we want to colonize the Universe?

Indeed, when contemplating the kind of AI we want to develop, we have to think beyond the near future and the limits of our planet; we should also think about humanity's cosmic endowment.

In order to design a system that is more moral, how do you decide what is moral?

We should not underestimate the whole ecosystem of values that might be vastly different from any human's. We should also think not just about the initial set of moral values but also about what we want to allow in terms of moral development.

We are already creating corporations that we feel have intentions and an independent existence. In fact, we create many entities, social or technological, that demonstrate volition, hostility, or morality. So are we in a sense simply the microbiome of future AI (echoing another session at SciFoo that tackled the controversial question of whether we indeed have free will or are in large part controlled by our microbiome, our gut bacteria)?

The panel responded that one of the issues concerning us, the potential “microbiome” of future entities, is whether we are going to get a unipolar or a multipolar outcome (a single AI entity or a diverse ecosystem of AIs). The idea of the intelligence explosion coming out of a system that is able to improve itself seems to point towards a unipolar outcome. In the end it very much depends on the rate at which the AI will improve itself.

Another issue is the building of machines that do not only what we literally ask them to do but what we really want – the key to remaining a thriving microbiome. Some panelists felt this was a big challenge: could we really create AI that is not self-reflective? A lot would seem to hinge on which aspects of the world the AI could represent. Once an oracle machine (generally considered safe because it only answers questions and does not act upon the world) starts modeling the people who ask the questions, its responses could start to include manipulative answers as well. Indeed, in some sense our DNA invented our brains to help reproduce itself better, but we found ways to circumvent that, through birth control for example (similarly, we have found ways to hack our gut bacteria). So would our "microbiome goals" be retained by entities smarter than us?

Finally, another related question is what the machines would be able to learn. What kinds of values and action schemas would be "innate" (pre-programmed), and what would the AI learn?

The session ended in a true SciFoo spirit with an honest recognition of our limited knowledge but also with a bold thought about the limitless possibilities for discovery and creativity:

Even in psychology we don't know what general intelligence really means, so in modeling cognitive processes we can't, in a sense, even claim that we are either near or far from general AI.

To this thought from the public the panel remarked that even though the threshold of general or super intelligence might be deceiving in a sense, being fluid and ill-defined, there is no issue in principle with creating general intelligence – after all, our own brains are an existence proof that it is possible.

Happy stepmother’s day!

This article can also be found on the Huffington Post.

On mother's day we remember and celebrate motherhood, and there are plenty of things that come to mind when thinking of what being a mother means. However, a related, but much harder, concept to think about is that of step-motherhood. The scarcity of images that come to mind in relation to the stepmother role (with the exception perhaps of the outdated stereotypes portrayed by Cinderella's and Snow White's evil step-moms) is quite at odds with the fact that stepmothers in the western world are no longer an exception to the family model. In fact, 40% of married couples with children in the US are stepcouples, and 12% of adult women in the US are stepmothers – about 14 million in total. So, who are these women? What does it mean to be a stepmother? And by whom (and when) should stepmoms be celebrated? These are the questions I find myself thinking about on mother's day this year.

In terms of who these women are, I should first say that I am one of them. I am the stepmother of two wonderful teenage boys, and I also have a stepmother myself. So that is two of us, two faces that I personally can easily put to this somewhat elusive notion. But beyond the numbers mentioned above, there are many faces and many stories. In fact, we stepmothers are a very diverse bunch. I remember the days when I was voraciously reading blogs of stepmothers on the internet, preparing to meet my future stepchildren. In some of those stories I recognized myself and my own emotional propensities; others were very foreign to me and my own emotional sensibilities.

So what makes us one group? What is this stepmother role about? I wanted to remain somewhat objective in writing this post, not making it about my own idiosyncratic experience, so in answering that question I first tried to evoke and describe the prevalent cultural stereotype of step-motherhood. But I failed, realizing to my surprise that culture and society offered little help with understanding stepmotherhood. To check whether it was simply my own memory or imagination that was lacking, I tried the same for motherhood, and it was so easy: the person who gave you life, unconditional love, etc. – all these images came to mind easily and rang so true. But when it came to stepmothers, the only pop culture images that I could easily think of were Cinderella's and Snow White's evil step-moms, and an overwhelmed Julia Roberts struggling desperately to gain the acceptance of her step-children. None of those images felt particularly representative of my knowledge of flesh-and-blood stepmothers. My own stepmother, for example, is a wonderfully kind and very well put together person.

Prevalent pop-culture images had so little to offer, so I still had no answer to this question. I therefore went back to personal introspection. What did I have to say about my own role as a stepmother? The first thing that came to mind was the memory of a hot summer evening in NYC (around the time when my relationship with my now husband had started to become more serious), walking and talking with my sister and formulating this question: what positive thing would I be able to bring into the lives of these two kids (my now stepkids)? Having been a stepchild myself, this was a very important question for me to answer, and I felt strongly that my relationship and my future depended on the answer. Since then, I've tried to build my role as a stepmother out of answers to that question, seeking ways of being positively present in my stepchildren's lives. That has at times made me the adult who was willing to watch Spongebob for hours, the person who makes new clothes magically appear in my stepkids' closets, the one who picks them up from school with a paper bag holding one chocolate-covered marshmallow and a small stash of Magic cards, etc.

This simple question has given me great opportunities over the years to be there for my step-children in a multitude of ways in which I felt they needed me or were able to enjoy my company or help, and in a sense I feel I've built my own role from scratch that way. But I've often felt that this fluid role was also very fragile and precarious, and many times I'd feel an uncomfortable emotional sting caused by its fuzziness. There have been many times when I've been afflicted with questions such as: Do my stepchildren really need me? Am I really making a difference for the better? Am I adding anything of value to their lives? Am I everything a stepmother should be?

I am also a psychologist in training, and I know how important roles are for one's identity formation, so in trying to analyze my own roles, I wondered what psychology has to say about stepmotherhood. One thing that I've discovered is that research has confirmed my own observations and feelings: ambiguity seems to be a common attribute of stepmotherhood. While mothers are aided by biology (think of all the hormones that help them bond with their children and forge that relationship) and also by culture (motherhood is elaborately defined and celebrated in all cultures), stepmothers have almost nothing to guide them in adopting this essential role. Moreover, their role and its nature hinge on a whole range of factors that they have very little control over. For example, Weaver and Coleman identified six factors that substantially determine the nature of the stepmother's role, factors that are often beyond the control of the stepmother herself: biological mothers, spouses, stepchildren, their own biological children, extended kin and experiences external to the family. In sum, ambiguity, lack of guidance (and role models) and lack of control seem to be the big challenges that stepmothers face.

I do believe, however, that these challenges are not insurmountable. I’m very much an optimist and feel that perhaps in all these challenges lie great opportunities:

1) Ambiguity gives us the opportunity to create complex, multi-faceted and original roles for ourselves that are not restricted to outdated stereotypes.

2) The lack of guidance means that our fairytale is still unwritten and that we have the opportunity to write it ourselves.

3) The lack of control is simply the illusion given by the richness of our lives – in fact we have the opportunity of exerting more control over how and who we are in relation to our step-children precisely because our options are more diverse.

To me these opportunities are worth celebrating. I feel that the celebration of stepmotherhood in our cultures should start with us, the stepmoms – we are the ones who should celebrate our own roles and our own identities. I also think that it is great to celebrate on mother's day, not because we are trying to steal the day away from the biological mothers, but because of the connection we have to those who share their children with us, the children who make it possible for us to experience and further define stepmotherhood.

Questioning authority in The School of Athens

This article can also be found on the Huffington Post.

On a recent trip to Rome, I was walking through the halls of the Vatican museums and after entering one of the Raphael rooms, I turned my head to discover, to my amazement, my favorite fresco: The School of Athens. I had somehow forgotten that this image resided inside the walls of the Vatican, so the unexpected encounter with it was the most delightful surprise.

The School of Athens has been one of my favorite artworks for a while, partly because of all the nostalgic memories of entire afternoons spent in my college library reading the works of the philosophers depicted in it. I had seen photos of it on computer screens and poster prints, but facing it in its grand, original enormity felt like I was there too, among my philosophy heroes.

However, it is not just the nostalgic memories of my own intellectual journey that make me love this painting. I love it also because I feel it represents the most exciting feature of the human spirit: the passion for knowledge, for figuring out the world. The other three walls of the room are also covered in gorgeous frescoes, representing theology, art and law, but to me the School of Athens is the grandest and most inspiring of them. This image represents the giants on whose shoulders we climbed to reach for knowledge that has far surpassed their wildest imaginations. Sometimes those shoulders were flimsy and unstable, like Aristotle's theory of motion, but the spirit of the School of Athens, the invitation to ask big questions, the public place to propose tentative answers and have them bolstered or refuted – that lives on and is still the engine behind any true quest for knowledge.

I spent a long time taking in the details of the painting, trying to recognize in the postures of the characters the spirit of different schools of thought or the personalities of different philosophers – a joyful exercise. The one character that struck me the most was the poorly clad old man lying nonchalantly on the steps of the school, blocking the way of Plato and Aristotle, who are approaching from the background. Rumor has it that this is Diogenes of Sinope, the "father of cynicism".

On the plane back from Rome, I went back to the sources, the Lives of Eminent Philosophers (by another Diogenes, Diogenes Laertius), to read up on him and refresh my memory. The account portrays Diogenes of Sinope as a frustrating individual, devoted to a life of boastful simplicity (he was a beggar who slept in a giant ceramic jar in the marketplace) and unhindered defiance of social norms (when, at a feast, certain people threw bones at him the way they would to a dog, Diogenes played a dog trick and urinated on them). He had a sharp intellect and an even sharper tongue that he used to taunt all the authority figures of his time. He often disrupted Plato's lectures, and when Alexander the Great, leaning over him, offered to give him anything he wanted, Diogenes famously retorted, "Stand out of my light." The collection of anecdotes about Diogenes of Sinope is a copious read, at times funny, at times outrageous, at times profound, and the central character, Diogenes himself, is a pain – an uncomfortable pebble in the shoe of all who have taken on an intellectual journey through the land of big questions.

The sentence that touched me the most in the entire piece on Diogenes was this: “Still he was loved by the Athenians.” The idea of this sentence is, I feel, beautifully reflected in Raphael’s fresco. I love first of all that Raphael chose to depict Diogenes, and I love how he depicted him. I love the fact that Diogenes occupies a central place in the image. I see this as a metaphor for a profound idea – that in the space of knowledge seeking, the uncomfortable questioning of authority has to take a central place; it has to be loved just like the Athenians loved Diogenes. An important part of seeking the truth is doubting, questioning, challenging. If the path you are on is never crossed by the authority-questioning spirit of Diogenes, it is not a path towards knowledge, and if you do not let yourself be crossed by him, you are not a knowledge seeker.

I also like a few other things about the way in which Diogenes is depicted. I like that he is the only one who has a split audience. While other characters are enwrapped in solitary contemplation, like the front left character writing at the table (thought to be Heraclitus), or demonstrating to a group of awe-struck followers, like the front right character drawing with a compass (thought to be Euclid), Diogenes' audience is torn. The man in blue points to the approaching intellectual authorities, Plato and Aristotle, while the man in green makes an exasperated gesture to acknowledge Diogenes. Questioning authority gives us options and alternatives; it will make us feel torn between competing theories. I also like that Diogenes does not entirely block the path of Plato and Aristotle but does force them into a slight detour, just as questioning authority does not stifle the quest for knowledge but makes us seek new and better routes. Finally, I like the fact that Diogenes is defiant yet vulnerable, exposing his bare chest in a non-aggressive posture. This is just how questioning authority, challenging and doubting should be. The role of doubts is not to stay with us forever but to expose themselves to careful examination and to force us to take a closer look at what we think we know.

On the first day of teaching after spring break, I showed my students the photos I had taken of the painting, introduced them to Diogenes and encouraged them to question authority – my authority as a teacher, the authority of their textbooks and the hardest authority to question of all – that of their own entrenched assumptions about the world. I asked them to embrace the spirit of the annoying Diogenes and to love him just like the Athenians did, because the School of Athens would not be the powerful, inspiring place that it is without a Diogenes of Sinope.

Terminator robots and AI risk

This article can also be found on the Huffington Post.

Concerns about the risk coming from the development of AI have recently been expressed by many computer science researchers, entrepreneurs and scientists, making us wonder: what are we fearing? What does this worrisome thing look like? An overwhelming number of attempts to explain the risk came in the media accompanied by pictures of terminator robots. But while the prevalent visual representation of AI risk has become the terminator robot, this is in fact very far from the most likely scenarios in which AI will manifest itself in the world. So, as we begin to face our fear, the face of what we're told we should fear is utterly misleading. My fear is instead that, like any representation that reveals some things and hides others, the terminator robot reveals merely something about our minds and their biases, while hiding the real dangers that we are facing.

The terminator robot has become such a "catchy" representation, I believe, because our minds and the fears they dream up are embodied. We have evolved to fear moving atoms: tigers that could attack us, tornados that could ruin our shelters, waves that could drown us, human opponents that could harm us. Killer robots from the future are just a spinoff that cultural evolution has grafted onto these deeply rooted, evolved sources of fear.

There is much research showing that the way we conceive of the world and the way we act or react in it are based on embodied representations. In their book "Metaphors We Live By", Lakoff and Johnson describe how we represent even very abstract concepts in relation to our own physical bodies. For example, we think of happiness as being up and sadness as being down when we talk about events that "lift us up" and days when we feel "down". These metaphors we use for representing abstractions in embodied ways are so deeply ingrained in our language that we don't even think of them as figures of speech anymore. Our reactions are equally influenced by embodied representations. Several studies have found that when people are looking at drawings of eyes, they cheat less and behave more pro-socially than when they are not. Finally, the way we act in the world and the way we perceive our actions to be ethical or not depend on embodiment. Variations of the famous trolley problem (in which a person is asked whether it is morally right to sacrifice the life of one person in order to save the lives of five by using this person as a trolley-stopper) have shown that people are more willing to say that it is ethical to do so when one needs to pull a lever that will cause the person to fall in front of the trolley than when one needs to push the person oneself.

All of this suggests that the reason killer robots "sell" is that we are wired to fear moving atoms, not moving bits of information. It's almost as if we need to give our fears an embodied anchor or they stop being scary. But what is the price we pay for the sensation of fear that we need to nurture through embodied representations? I believe the price is blindness to the real danger.

The risk of AI is very likely not going to play out as armies of robots taking over the world, but in more subtle ways: AI taking our jobs, controlling our financial markets, our power plants, our weaponized drones, our media… Evolution has not equipped us to deal with such ghostly entities, which come not in the form of steel skeletons with red shiny eyes but in the form of menacing arrangements of zeros and ones.

In spite of our lack of biological readiness to react to such threats, we have created societies that are more and more dependent on these elusive bits of information. We no longer live solely in a world of moving atoms; we also live in a world of moving bits. We've become not just our bodies but also our Facebook page, our Twitter account, our internet searches, our emails, etc. We no longer own just gold coins or dollar bills; we own numbers: credit card numbers, passport numbers, phone numbers. Our new digital world is quite different from the one that hosts our bodies, and it is silly to think that what is worthy of fear in it will have the same characteristics as what's worthy of fear in this one. Just because having our emails and internet searches stored and read by others does not feel as creepy as a pair of eyes always peering over our shoulder, that doesn't mean it really isn't. And just because a silent and stealthy takeover by AI does not give us the heebie-jeebies quite as much as roaring armies of terminators do, that doesn't mean it is not equally dangerous, or even more so.

So, even if we do not feel the fear, we need to understand it. We need to be fearfully mindful not of the terminator robots themselves, but of what they hide and misrepresent.

Why AI?

This article can also be found on the Huffington Post.

I have been perplexed lately by the media frenzy on the topic of artificial intelligence (AI) and all the inflammatory statements put forth about “deadly machines” and “robot uprisings.” Of course, this can partly be explained by the public’s general taste for frivolous alarmism and the media’s attempt to satisfy it. However, I feel that besides the question of “why this general reaction?”, there is another important question worth asking: “Why this particular topic? Why AI?”

Why is AI capturing so much of our attention and imagination? Why is it so hard to have a levelheaded discussion about it? Why is the middle ground so infertile for this topic?

I have come to believe that the reason is that AI engages some of our deepest existential hopes and fears and forces us to look at ourselves in novel, unsettling ways. Even though the ways in which we are forced to face our humanity are new, the issues and questions are old. We can trace them back to stories and myths that we’ve told for ages, to philosophical questions we’ve posed in various forms throughout the centuries, or to deeply rooted psychological mechanisms that we’ve slowly discovered. Here are four of the deeper existential questions that AI forces us to ask:

What if we get what we ask for but not what we really want?

Or, in the words of Coldplay's "Fix You," "when you get what you want but not what you need," what happens then? The ancients were no strangers to this question. Legend says that King Midas asked the gods to make everything he touched turn to gold. So the king became rich, but he also died of starvation, because the food he touched turned to gold as well. AI, more specifically human or super-human AI, is that tantalizing golden touch. Any programmer has at some point experienced an inkling of it, the great power of a program that computes what it would take you several lifetimes to do — but it's the wrong computation! Yet it's the right one because it's exactly what you asked for, but not what you really wanted. Welcome to the birth of a computer bug!

Superhuman AI could of course magnify this experience and turn itself into our own buggy god that would give us tons of gold and no food. Why would it do that? AI researcher Stuart Russell likes to illustrate this with a simple example: imagine that you ask your artificially intelligent self-driving car to get you to the airport as fast as possible. In order to do so, the car will drive at maximum speed and accelerate and brake abruptly… and the consequences could be lethal to you. In trying to optimize for time, the car will set all other parameters, like speed and acceleration, to extreme values and possibly endanger your life. Now take that scenario and extend it to wishes like "make me rich", "make me happy", or "help me find love"…
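
To make this intuition concrete, here is a minimal toy sketch in Python (my own illustration, not Russell's actual example or any real self-driving system): when the objective is travel time alone, the optimizer pushes speed to the highest value allowed, while adding a penalty for everything we forgot to specify yields a far more reasonable choice. All numbers and both cost functions are made-up assumptions.

```python
# Toy sketch: a literal objective ("as fast as possible") pushes every
# unconstrained parameter to an extreme; a fuller objective does not.
# The distance, speeds, and cost functions are illustrative assumptions.

DISTANCE_KM = 30.0
SPEEDS_KMH = range(30, 201, 10)  # candidate cruising speeds

def travel_time(speed):
    """Hours needed to cover the distance at a constant speed."""
    return DISTANCE_KM / speed

def discomfort(speed):
    """Crude stand-in for hard braking, danger, and passenger stress."""
    return ((speed - 60) / 60) ** 2 if speed > 60 else 0.0

# Objective 1: time only -- the literal request "get me there as fast as possible"
fastest = min(SPEEDS_KMH, key=travel_time)

# Objective 2: time plus a penalty for the things we forgot to specify
balanced = min(SPEEDS_KMH, key=lambda s: travel_time(s) + 0.5 * discomfort(s))

print(f"time-only objective chooses {fastest} km/h")   # the maximum allowed speed
print(f"penalized objective chooses {balanced} km/h")  # a far saner speed
```

The point is not the particular numbers but the pattern: whatever the single stated objective leaves unspecified tends to get sacrificed to it.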

What this thought experiment should make us realize is that we blissfully live in the unspecified. Our wishes, our hopes, and our values are barely small nodes of insight in the very complicated tapestry of reality. Our consciousness is rarely bothered with the myriad fine-tuned parameters that make our human experiences possible and desirable. But what happens when another actor like AI enters the stage, one that has the power to weave new destinies for us? How will we be able to ask for the right thing? How will we be able to specify it correctly? How will we know what we want, what we really want?

What if we encounter otherness?

The issue of not being able to specify what we want thoroughly enough is in part due to our limited mental resources and our inability to make predictions in environments above a certain level of complexity. But why wouldn't our super-human machines be able to do that for us? After all, they will surpass our limitations and inabilities, no? They should figure out what we really want.

Maybe… but likely not. Super-human AI will likely be extremely different from us. It could in fact be our absolute otherness, an “other” so different from everything we know and understand that we’d find it monstrous. Zarathustra tells his disciples to embrace not the neighbor but the “farthest.” However, AI might be so much our “farthest” that it would be impossible to reach, or to touch, or to grasp. As psychologist and philosopher Joshua Greene points out, us humans, we have a common currency: our human experiences. We understand when someone says: “I’m happy” because we share a common evolutionary past with them, a similar body and neural architecture and more or less similar environments. But will we have any common currency with AI? I like it when Samantha explains to Theodore in the movie Her that interacting with him is like reading a book with spaces between words that are almost infinite, and it is in these spaces that she finds herself, not in the words. Of course, the real-world AI would evolve so fast that the space between it and humans would leave no room for a love story to ever be told.

What if we transcend and become immortal but transcendence is bleak and immortality dreary?

But what if, instead of being left behind, we merge with the machines, transcend and become immortal, just as AI advocate Ray Kurzweil optimistically envisions? Spending time with people who are working on creating or improving AI, I've realized that beyond the immediate short-term incentives to build better voice recognition or better high-speed trading algorithms, many of these people hope to ultimately create something that will help them overcome death and biological limitations — they hope to eventually upload themselves in one form or another.

Transcendence and immortality have been the promise of all religions for ages. Through AI we now have the promise of a kind of transcendence and immortality that does not depend on a deity, but only on the power of our human minds to transfer our subjective experiences into silicon. But as long as hopes of transcendence and immortality have existed, tales of caution have also been told. I am particularly fond of one tale explored in the movie The Fountain. When the injured, dying knight has finally reached the Tree of Life, he ecstatically stabs its trunk and drinks from it, and happily sees his wounds heal. But soon the healed wounds explode in bouquets of flowers and he himself turns into a flower bush that will live forever through the cycle of life and regeneration. But that is of course not what the knight had hoped for… It’s interesting that the final scene of the movie Transcendence also ends with a close-up of a flower, reminiscent of Tristan and Isolde and their tragic transcendence through a rose that grows out of their tombs. Of course, there are less mythical ways in which transcendence and immortality through AI could go wrong. For example, neuroscientist Giulio Tononi warns that even though we might build simulations that act like us and think like us they will likely not be conscious — it wouldn’t feel like anything to be them. Heidegger saw in death a way to authenticity, so before we transcend it and become immortal, we might first want to figure out what is authentically us.

What if we finally fully know ourselves… and make ourselves obsolete?

Another promise from AI is exactly that: authentic knowledge about what we are. AI extends the promise that we could finally know ourselves thoroughly. A great part of AI research is based on brain simulation, so if we keep forging on we might actually figure out what every single neuron, every single synapse does; and then we will have the keys to our own consciousness, our own human experiences. We will finally be able to say a resounding "Yes!" to the imperative written on the gates of the temple of Delphi: "know thyself." The catch is that, as my husband, physicist Max Tegmark, likes to point out, every time we've discovered something about ourselves we've also managed to replace it. When we figured out things about strength and muscle power, we replaced them with engines, and when we discovered more about computation, we invented computers and delegated that chore to them. When we discover the code to our human intelligence, our consciousness and every human experience imaginable, will we replace that too? Is our human destiny to make ourselves obsolete once we've figured ourselves out? Creating AI is in some sense looking at our own reflection in a pond — just like Narcissus — without realizing that the pond is looking into us as well. And as we fall in love with what we see, might we also be about to drown?

Will we figure out who we are, what we want, how to relate to what we are not, and how to transcend properly? These are big questions that have been with us for ages and now we are challenged like never before to answer them. Humanity is heading fast to a point where leisurely pondering these questions will not be an option anymore. Before we proceed in our journey to changing our destiny forever, we should stop and think where we are going and what choices we are making. We should stop and think: why AI?

GPS for The Brain

This article can also be found on the Huffington Post.

I only met The Brain fairly late in my intellectual and self-discovery pursuits, but when I did, it completely changed the way I thought about the world and about myself. I call it The Brain because I feel that the boom in neuroscience research and the media attention it gets have transformed this organ into a cultural phenomenon, into a character that one meets frequently (Google returns 2,430,000,000 results when you search for the word brain), sometimes uncomfortably (not all of us are OK with being reduced to a blob of neurons), sometimes reassuringly (some think that if scientists see it in the brain, it must be real!), a character that haunts our collective imagination. The Brain is here to trick us in sketchy accounts about right-hemisphere and left-hemisphere people, to genuinely disturb us in reports of new versions of Libet's experiments on free will, and to wow us in mind-reading studies that sound like science fiction. The Brain has become a rock star, and as a true rock star it's here to rock our world, our imagination, our souls.

For all of us trying to better navigate the intricate paths of the soul, getting to meet and know The Brain can be a transformative journey in itself. It certainly was for me, and I wonder if you too resonate with it. For a very long time I had an attitude of respectful indifference towards The Brain. I had a healthy fear of traumatic brain injuries and meningitis from my mom, who is a doctor, but at the same time the idea that all the richness of experience, of learning, loving and being could be reduced to patterns of neural firing was more than unintuitive to me, it was downright appalling. It took many years until it dawned on me that the brain was central to the shaping of the landscapes of the mind, of the soul, of that intricate structure that I called my experience and my world-view.

I remember the moment very vividly. I was taking a neuropsychology class and we were discussing a case of Wernicke's aphasia, reading through a transcript of a conversation with a patient. What struck me was how undisturbed the patient seemed by her inability to understand what others were trying to communicate to her. I remember the professor commenting on this and saying that the patient does not understand that she does not understand. Her language was simply gone. A stroke, a tiny bit of damage to this blob of neurons, and to her, language just stopped existing.

These days I get to spend a great amount of time thinking about The Brain in the context of my research on autism, but I also think about it when attempting to follow the imperative "know thyself". Here are three of the many things I learned about myself from The Brain:

  1. Happiness belongs to the brain as well.

I used to believe that meanings were purely in the mind, that they were the results of thought processes and that they followed the logic of a thought process, but then I learned about reward systems in the brain and how differently they create meaning. Back then I was in the bad habit of always making myself predict bad outcomes: being harshly criticized for an idea should I share it, being rejected should I approach someone, etc. In my mind I was doing this in order to lower my expectations and be pleasantly surprised whenever something good happened. But that pleasant surprise never came, even when good things did happen (which was actually most of the time). Why were the meanings of my experiences so negative all the time? Why didn't they follow the logic of my reasoning? If one looks at the way reward systems work in the brain, the picture becomes clearer. In the brain, dopamine gets released whenever a prediction is actually confirmed. That's how we learn to notice patterns in the world, that's how we learn to associate actions with their consequences. When a prediction turns out to be wrong, the brain responds with a drop in dopamine levels, which gives rise to a negative emotion. That's how we learn to reevaluate things. What I was doing to myself was simply cruel. Whenever something negative happened that confirmed my prediction, my brain would reward that with dopamine, but that wasn't enough to make me feel happy, since something bad had just happened! Whenever something good happened, my brain would get a dopamine drop to teach me to reevaluate my predictions, which in turn did not allow me to stay happy and enjoy the positive nature of the outcome! So I had to learn to make my brain happy, not just my mind.

  2. Experience should not be taken for granted.

Experience, too, became so much more complex and rich for me when I started thinking of it as the result of a brain process. Every time I climb a mountain now, I climb it in awe not just of the beautiful scenery but also of the amazing ability of my brain to calculate with such precision all the bumps in the trail and carry me to the top unharmed. But even more fundamental than the processes that allow me to navigate the world and enjoy the exhilarating experience of the mountain-top view, I am in awe of the pure existence of this experience. You've probably heard of people with hemineglect, who are simply not aware of half of their visual field. When they draw a clock, for example, they only draw half of it. What keeps fascinating me about this condition, just like in the case of Wernicke's aphasia, is the fact that these people are undisturbed by their condition. It's not that they cannot see half of their world; half of their world simply does not exist to them anymore. This has taught me that having an experience, having a world manifested in my mind, is something I should not take for granted. Pondering how a world can emerge for each of us from the firing of neurons, how this world's features can be shaped by these firing patterns, and attempting to understand consciousness as a state of the brain is, to me, the most exciting adventure we can embark on.

  3. Neurons deserve respect.

But how can experience emerge from the brain, from a blob of flesh? As I already confessed, this idea seemed to me very confusing and dissatisfying – so what got me fired up about it? It was firing neurons! Or rather an enactment of firing neurons. Several times I played the role of a little firing neuron in a neural network game where people "fire" (wave their hands) according to some pre-established rules. After repeated trials one can get the group of people to blindly wave in ways that correspond to recognizing letters. It's a very intricate game, and getting the network to recognize letters is a difficult process. However, one time we managed to get our network to read a word – "no". We didn't mean to, but the person who was supposed to select the letters to be recognized accidentally picked two instead of one. We couldn't believe it! Our simulated neural network could read! The idea that something as intelligent as reading can arise from a process as simple as firing according to basic rules taught me to be deeply respectful and mindful of the blob of neurons in my head, of the substrate of my mind and consciousness. This to me means things like eating right and sleeping right, taking care of these precious cells that allow me to read, to think, to love.
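
For readers who want to play with the idea, here is a minimal sketch in Python of the same principle (my own illustration, not the actual rules of the classroom game): a single simulated "neuron" that fires according to the classic perceptron rule learns to tell the letter N from the letter O, and can then "read" the word "no". The 5x5 pixel letters and the learning rate are assumptions made up for this example.

```python
# A toy "firing neuron": the perceptron rule applied to two pixel letters.
# Everything here (letter shapes, learning rate, epochs) is an illustrative assumption.

N = ("1...1"
     "11..1"
     "1.1.1"
     "1..11"
     "1...1")

O = (".111."
     "1...1"
     "1...1"
     "1...1"
     ".111.")

def pixels(pattern):
    """Turn a 5x5 string pattern into a list of 0/1 inputs."""
    return [1 if c == "1" else 0 for c in pattern]

training = [(pixels(N), 1), (pixels(O), 0)]  # fire (1) for N, stay silent (0) for O

weights = [0.0] * 25
bias = 0.0

def fires(inputs):
    """The neuron 'waves its hands' if its weighted input exceeds zero."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Perceptron learning rule: nudge the weights only when the neuron fires wrongly.
for _ in range(20):
    for inputs, target in training:
        error = target - fires(inputs)
        bias += 0.1 * error
        weights = [w + 0.1 * error * x for w, x in zip(weights, inputs)]

# "Read" the two letters in sequence, as in the accidental moment from the game.
word = "".join("n" if fires(pixels(p)) else "o" for p in (N, O))
print(word)  # prints "no"
```

Nothing in the loop "knows" about letters; recognition emerges from nothing more than weighted firing and tiny corrections, which is exactly what made the hand-waving version of the game feel so magical.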

Learning how to think of and love myself as a brain has been one of the most transformative and exciting experiences of my life. When I look back, I think of my initial indifference towards The Brain as a lack of imagination. My husband likes to put it this way: thinking of one's consciousness as firing patterns in a brain takes a leap of imagination analogous to the one it takes to realize that a cute bunny is simply rearranged grass. But once that leap of imagination is taken, the repertoire of self-knowledge increases tenfold. People often say that we need to meet others in order to get to know who we are. I needed to meet this colossal character, The Brain, in order to discover and get to love the brain in me.