Tuesday 11 August 2015

Artificial Intelligence should benefit society, not create threats

Toby Walsh, NICTA
Some of the biggest players in Artificial Intelligence (AI) have joined together calling for any research to focus on the benefits we can reap from AI “while avoiding potential pitfalls”. Research into AI continues to seek out new ways to develop technologies that can take on tasks currently performed by humans, but it’s not without criticisms and concerns.
I am not sure the famous British theoretical physicist Stephen Hawking does irony, but it was somewhat ironic that he recently welcomed the arrival of the smarter predictive computer software that controls his speech by warning us that:
The development of full artificial intelligence could spell the end of the human race.
Of course, Hawking is not alone in this view. The serial entrepreneur and technologist Elon Musk also warned last year that:
[…] we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.
Both address an issue that taps into deep, psychological fears that have haunted mankind for centuries. What happens if our creations eventually cause our own downfall? This fear is expressed in stories like Mary Shelley’s Frankenstein.

An open letter for AI

In response to such concerns, an open letter has just been signed by top AI researchers in industry and academia (as well as by Hawking and Musk).
Signatures include those of the president of the Association for the Advancement of Artificial Intelligence, the founders of AI startups DeepMind and Vicarious, and well-known researchers at Google, Microsoft, Stanford and elsewhere.
In the interests of full disclosure, mine is also one of the early signatures on the list, which continues to attract more support by the day.
The open letter argues that there is now a broad consensus that AI research is progressing steadily and its impact on society is likely to increase.
For this reason, the letter concludes we need to start to research how to ensure that increasingly capable AI systems are robust (in their behaviours) and beneficial (to humans). For example, we need to work out how to build AI systems that result in greater prosperity within society, even for those put out of work.
The letter includes a link to a document outlining some interdisciplinary research priorities that should be tackled in advance of developing artificial intelligence. These include short-term priorities such as optimising the economic benefits and long-term priorities such as being able to verify the formal properties of AI systems.

The AI threat to society

Hollywood has provided many memorable visions of the threat AI might pose to society, from Arthur C. Clarke’s 2001: A Space Odyssey through Robocop and Terminator to recent movies such as Her and Transcendence, all of which paint a dystopian view of a future transformed by AI.
My opinion (and one many of my colleagues share) is that AI that might threaten our society’s future is likely still some way off.
AI researchers have been predicting for the last 30 or 40 years that it will take another 30 or 40 years. And if you ask most of them today, they (as would I) will still say it is likely to take another 30 or 40 years.
Making computers behave intelligently is a tough scientific nut to crack. The human brain is the most complex system we know of by orders of magnitude. Replicating the sort of intelligence that humans display will likely require significant advances in AI.
The human brain does all its magic with just 20 watts of power. This is a remarkable piece of engineering.

Other risks to society

There are also more imminent dangers facing mankind such as climate change or the ongoing global financial crisis. These need immediate attention.
The Future of Humanity Institute at the University of Oxford has a long list of risks besides AI that threaten our society, including:
  • nanotechnology
  • biotechnology
  • resource depletion
  • overpopulation.
None of this means, though, that there are no aspects of AI that need attention in the near future.

The AI debate for the future

The Campaign to Stop Killer Robots is advancing the debate on whether we need to ban fully autonomous weapons.
I am organising a debate on this topic at the next annual conference of the Association for the Advancement of Artificial Intelligence later this month in Austin, Texas, in the US.
Steve Goose, director of Human Rights Watch’s Arms Division, will speak for a ban, while Ron Arkin, an American roboticist and robo-ethicist, will argue against it.
Another issue that requires more immediate attention is the impact that AI will have on the nature of work. How does society adapt to more automation and fewer people needed to work?
If we can get this right, we could remove much of the drudgery from our lives. If we get it wrong, the increasing inequalities documented by the French economist Thomas Piketty will only get worse.
We will discuss all these issues and more at the first International Workshop on AI and Ethics, also being held in the US within the AAAI Conference on Artificial Intelligence.
It’s important we start to have these debates now, not just to avoid the potential pitfalls, but to construct a future where AI improves the world for all of us.
The Conversation
Toby Walsh is Professor, Research Group Leader, Optimisation Research Group at NICTA.
This article was originally published on The Conversation. Read the original article.

Friday 7 August 2015

Free Art Source - OpenGameArt.org!

OpenGameArt.org is a website with the purpose of providing "a solid (and hopefully ever-expanding) variety of high quality, freely licensed art, so that free/open source game developers can use it in their games."

It has several different kinds of art available on the site - 2D art (including pixel-style art), 3D art, concept art, textures, music, sound effects, and documents such as tutorials.

The search bar allows you to filter by the license the artwork has been released under, if you need something compatible with your intended use. CC0, a public domain dedication, is generally the least restrictive option, but it's worthwhile reading the terms and conditions of the other licenses to see whether they suit your purpose. Licenses which are just "CC-BY" and a version number are usually pretty easy to comply with, as well - you have to credit whoever created the artwork and link back to them. A list of all the licenses available on the site, and what they mean, is available under their FAQ.

There are some tagging features, but I haven't been able to find a tag cloud yet - you might like to browse through the animated, sidescroller, or platformer tags for potential critter sprites.

There are some quirks of spriting for the Creatures series, so these sprites may need to be altered to fit - in particular, pure black appears transparent in-game. However, OpenGameArt has a wide variety of free art which could easily be adapted into Creatures COBs and agents.

Wednesday 5 August 2015

Creature-Friendly Principles - Consistency and Readiness

A Bear of Very Little Brain and Friends at the New York Public Library

Courtesy of Flickr users Tony and Wayne.

Norns, Grendels, and Ettins are bears of very little brain.  Ever since the earliest days, the classification system has helped creatures experience their world.  To a creature, all toys are the same, and all food is the same.

This has a few implications.

Creatures learn what they can do with any given object genus: once they come across enough edible foodstuffs or pushable toys, they learn that they can eat food or push toys.

Creatures also learn what to expect from genuses.  They expect food to satisfy their hunger.  They expect fire to be painful.  They expect toys to be fun.  They can't tell if an item is 'ready' for them or not.

When deciding what your object should do, it is helpful to look at the official scriptoriums (scriptoria?) of the worlds.  Keep in mind that several of the games were built by a team, so some genuses won't be fully consistent.  The Creatures Development Standards for C3/DS are a good place to start, as well.  You're looking for what actions a creature can take on an object, as well as what chemicals and stimuli result from that.

If you want items to seem ever-ready for the creatures, you can accomplish that by not designing your objects so that they're 'out of commission' for a long time, or by making them invisible for the period when they aren't ready for action.  That way, creatures won't learn that pushing food doesn't always work, and try other actions instead.

For example, a DOIF command can be used with the POSE command to keep an apple blossom invisible, by leaving its Invisible attribute set until the timer script runs through all the growth poses. At that point, in C3/DS, smells are added too, to make the object perceptible through the winding corridors of the Ark.  Similarly, a mature herb can be installed as visible, and when it is eaten, toggled to invisible until the herb is mature again.  A lot of concepts from C3/DS coding are retrofittable to C2 or C1 coding, if appropriate commands exist in the earlier versions.
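A rough C3/DS CAOS sketch of that growth pattern might look something like the following.  This is illustrative only: the classifier 2 7 1000, the final pose number, and the smell CA index are all made up for the example, and it assumes the C3/DS convention that the Invisible attribute is ATTR bit 16 and Activateable is bit 4 - check the official CAOS documentation before relying on any of these values.

```
* Hypothetical timer script for a growing plant (classifier 2 7 1000 is invented).
* While immature, the plant keeps its Invisible attribute set, so creatures
* neither see it nor learn that pushing it does nothing.  At the final growth
* pose it becomes visible and starts emitting a smell so creatures can find it.
scrp 2 7 1000 9
    doif pose lt 5
        * advance one growth pose per timer tick
        setv va00 pose
        addv va00 1
        pose va00
        doif pose eq 5
            attr 4          * Activateable only - the Invisible bit is now cleared
            emit 6 0.3      * illustrative CA index and amount for a food smell
        endi
    endi
endm
```

The same DOIF test on POSE could be reused in the eat script to toggle the Invisible attribute back on after consumption, until the plant matures again.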