Marek Rosa is the CEO and CTO of GoodAI, a research company building general AI. In this interview he speaks about the dangers of militarized AI, the future of automation and how close we are to achieving general AI.
Do you believe that last week’s open letter, signed by a raft of Silicon Valley executives, adequately articulates the current issues in the harnessing of AI?
The recent letter to the UN is a good starting point to open up the issue of safety to wider discussion. The capabilities of AI and robots are improving very fast and often in unpredictable directions.
If automated weapons become available and easy for anyone to use, they could dramatically change the dynamics not just of warfare but of terrorism too. For example, cheap and intelligent aerial drones could be fitted with explosives, produced in the millions and sent out autonomously to search for and destroy targets.
We need to start thinking about the future now and not wait until it is too late because, as described in the letter, the consequences could be unthinkable for humanity.
How do we best safeguard against the kinds of AI dangers envisaged by Elon Musk and others?
The key here is collaboration and preparation. Although the risks might seem a long way away, we need to start working with companies and governments to ensure that policy is implemented now, before we reach a tipping point. It is great that these 116 companies from across the world have come together, but we must keep up the pressure to ensure that safety is at the top of everyone’s agenda.
When we talk about battlefield AI and any other forms of AI, are we conflating two separate arguments?
Although the letter specifically focused on the militarization of AI, we are potentially facing similar safety issues in the business world too, as companies and developers race to create the first general artificial intelligence. The worry is that these companies and researchers are so intent on creating the first general AI agent that they might potentially neglect the safety aspects in favor of faster deployment.
At GoodAI we are also trying to bring this issue to the top of the agenda, so that it is openly discussed in public forums and an interdisciplinary approach can be taken to finding solutions. In November we will be launching Round Two of our worldwide General AI Challenge, where we will ask participants to propose practical steps that can be taken to avoid the negative effects of the AI race and make sure general AI is developed safely.
Another issue is the weaponization of machines that are not intended for this purpose. For example, a driverless car could be reprogrammed and used as a deadly weapon. This could be the case with many different types of machines. Furthermore, if these machines malfunction in any way, we could be facing deadly accidents, so safety needs to be discussed across all AI development.
Do you believe the current conversation about AI and autonomous vehicles and factories is factoring in enough social implications? Is the tech industry being too blasé, or too cautious, about the societal effects of autonomy?
I think the tech industry is very much aware of the societal impacts of autonomy and motivated to address them. The industry initiates discussions about the effects on jobs and, understanding that AI and automation will transform the economy heavily, proactively develops new products and services through disruptive technologies to help improve overall living standards in a fast-changing world.
GoodAI wants to give robots a set of values to apply to everyday encounters. What are we missing to make this a reality, and how can we ensure these values are implemented by the right people?
We are still missing the right algorithms: an AI that would be able to learn and sustain all those complex and useful behaviors and values that we would teach it. Such an AI needs to have the capability of gradual learning: acquiring new skills without forgetting what it learned before, building on top of prior knowledge, and effectively reusing and recombining its abilities to readily react to unseen problems.
General AI will have the values of its creator. To increase the chance that general AI is developed first by us, or by other well-intentioned actors in the known AI community, rather than by a bad actor, the best we can do is work hard on both algorithms and safety, and not postpone development without reason.
I don’t doubt that publicly known developers have good intentions, but we all need to be extra careful and not get carried away by the vision of success. We should not hastily deploy a powerful AI without properly testing whether we got the safety and the values right, even though the spirit of competition and market pressure might tempt us to do otherwise.
How closely will the minds of humans and robots be linked in the future? Do you see a world in which humans are divided starkly from robots?
People often imagine this scenario where humans and robots are divided. However, I would like to stress that AI should be a tool for humans to use, or an augmentation of our own natural intelligence. It should be something that we use to improve our own abilities and should not become a separate species as such.
At GoodAI we have founded the AI Roadmap Institute, which works collaboratively with other organizations and, as well as studying and comparing technological roadmaps to general AI, creates roadmaps envisaging what the future of AI might look like. We envisage all possible scenarios and work to figure out how we can ensure the less favorable scenarios do not manifest themselves.
Which areas of society do you feel will be most affected soon by robotics and AI?
It is difficult to tell exactly what will be affected “soon” by robotics and AI. What we will start to see more of every day is narrow or specific AI. This is AI that has been programmed to do very specific tasks, but cannot do much else. For example, a narrow AI agent that is a chess master will not be able to transfer those skills to play the traditional Chinese game Go, and vice versa.
At GoodAI we have started a sister company, GoodAI Applied, which will focus on using the AI research we carry out to create specific applications while leaving our researchers free to continue their work on general or “true” AI.
In terms of specific AI, I would be surprised if we do not start to see all businesses and governments using it. Specific AI tends to be good at processing and analyzing extremely large amounts of data and finding patterns or solutions to well-defined local problems.
General AI has not been created yet. But when it is, it will be pivotal to the development of technology, science and society. In research it is becoming more apparent that we are reaching the limits of the human brain. Questions about the universe, and even about processes within the human body, remain unanswered. General AI will be able to take a much wider view of these questions, optimizing the process of technological and scientific discovery and taking us to a level beyond what is manageable using current technologies.
Are there any aspects of AI that have surprised you, or have changed your mind, since founding GoodAI?
One of the positive surprises is what deep learning, a lot of computing power and training data were able to accomplish in the narrow AI domain; for example, AI that learns to generate fake images and video.
Another thing that has surprised me is the interest that AI has received in the last two years. I wasn’t expecting this: it’s a mainstream topic now, not only from a technology perspective but also from safety and societal perspectives. Today we can publicly discuss topics that would have been considered fringe science just a few years ago.
Also, many researchers have expanded their focus from narrow AI to what we at GoodAI call general AI, even though many of them don’t call it that.
All these surprises are positive. One negative surprise is how some people try to depict AI only in a negative light, showing only the hypothetical risks. I think planning for the future is important, but raising panic is not necessary, especially when we are discussing very hypothetical situations.
What are the next technological steps that will determine the path down which AI goes from here?
The next step on the route to general AI is to demonstrate gradual learning: the ability of an AI agent to acquire new skills and knowledge step by step by reusing skills it has already learned and improving itself. At GoodAI we teach our AI agents in a school with a specific curriculum of tasks that we believe will be useful for them in the future, just as children are taught in school. The goal for an AI is not to memorize any specific facts, but to build up useful and general skills for more efficient understanding, problem-solving, and further exploration.
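To make the idea of gradual, curriculum-based learning concrete, here is a minimal Python sketch of such a training loop. The Agent class, its learn and evaluate methods, and the task names are hypothetical placeholders rather than GoodAI's actual system; the point is only the structure: train on progressively harder tasks, reuse what was learned earlier, and repeatedly re-test earlier skills to detect forgetting.

```python
# Minimal sketch of a curriculum-style "AI school" loop.
# All names (Agent, learn, evaluate, task names) are illustrative assumptions.

class Agent:
    def __init__(self):
        self.skills = {}  # skills acquired so far, keyed by task name

    def learn(self, task):
        # Placeholder: real training would reuse prior skills as building blocks.
        self.skills[task] = f"policy_for_{task}"

    def evaluate(self, task):
        # Placeholder: True if the skill for this task is still present.
        return task in self.skills


# Tasks ordered from simple to complex (hypothetical curriculum).
curriculum = ["copy_symbols", "count_objects", "simple_navigation", "tool_use"]

agent = Agent()
for i, task in enumerate(curriculum):
    agent.learn(task)
    # Re-test every earlier task to check for catastrophic forgetting.
    forgotten = [t for t in curriculum[: i + 1] if not agent.evaluate(t)]
    print(f"after {task}: forgotten = {forgotten or 'nothing'}")
```

In a real system the evaluation step would measure task performance rather than mere presence of a stored policy, but the loop captures the two properties described above: building on prior knowledge and not losing it.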
Instead of designing AI that can solve specific and narrow problems, we aim to develop AI that is capable of designing other AIs. The plan is not just to create AI but what I like to call meta-AI.