Professor in Philosophy, Social Theory, and Peace Studies, College of the Atlantic

he/him

Gray Cox is a professor of political economy, history, and peace studies at College of the Atlantic, where his teaching includes courses designed to prepare students to collaborate effectively on interdisciplinary projects addressing human ecological problems in a wide variety of complex contexts and cross-cultural settings.

Cox has collaborated on a variety of projects in community organizing, peace work, election observation, and sustainable development. His most recent research focuses on developing conflict resolution approaches to problems of ethics and on addressing the national security and environmental threats posed by potential breakthroughs in Artificial Intelligence.

Talks

Nonviolent Alternatives in National Struggles and International Relations

Since their pioneering use in South Africa and India in the early 20th century, nonviolent methods of struggle and social change have developed in a variety of forms and have brought about dramatic transformations in Eastern Europe, Latin America, the United States, the Middle East, and elsewhere.

What does current research suggest are the strengths and limitations of such methods compared with violent methods of social change and military methods of settling international disputes — and what roles may they play in the future?


Artificial Intelligence: Making ‘Smart Systems’ Wiser

How can our local and global systems of defense, health, food, education, etc. become resilient, secure, life-enhancing, and wiser amidst the rush toward ever “smarter” technologies and the relentless pursuit of narrow goals like military dominance, profit, or high test scores?


Threats to National Security Posed by Artificial Intelligence and Robotics

Entrepreneur Elon Musk has said: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.” Scientist Stephen Hawking has noted that “success in creating AI could be the biggest event in human history,” but cautioned that “it might also be the last, unless we learn how to avoid the risks,” warning that before long this technology may be “outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.” He commented further, with others, in a HuffPost column: “If a superior alien civilization sent us a text message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here — we’ll leave the lights on’? Probably not — but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing ever to happen to humanity, little serious research is devoted to these issues outside small non-profit institutes . . .”

What range of security issues is at stake, and how can they best be addressed?