Thought Leaders Forum: How Do You Define Global Ethics?

Thought leaders Jonathan Haidt, Rachel Kleinfeld, Ian Bremmer, Carne Ross, Andrew Nathan, Anne-Marie Slaughter, Michael Walzer, Gillian Tett, and Brent Scowcroft describe their visions of a global ethic.


Comment by Ted Howard on May 26, 2015 at 2:34pm

Agree in part with Michael Walzer that, within the economic and governance frameworks we have today, peace is not a stable or high-probability outcome. To me that simply means it is time we chose to transcend dominance-based hierarchical governance in favour of distributed, consensus-based governance, and to transcend scarcity- and competition-based economics in favour of cooperative systems, empowered by advanced automation, that deliver abundance and security to all, no exceptions. That option wasn't available even 50 years ago; it is now.

Agree with Bineta Diop that until the vast majority are prepared to reject violence (except in response to violence, and then only to subdue, not to kill), peace will elude us. To me, that is relatively easy to achieve once one understands the profound evolutionary power of cooperation, and the need for attendant strategies at every level to protect cooperation from cheating.


Agree with Rebecca MacKinnon that peace means the elimination of violence at all levels of social organisation, from the individual upward, except insofar as violence is required to prevent further violence. I suspect that at ever more abstract levels it will always be the case that the price of liberty is eternal vigilance.

Disagree with Thomas Pogge. I acknowledge the existence of parties with interests in maintaining conflict, and see that those parties hold an essentially very short-term view of self-interest. When one extends the view of self-interest, peace is the necessary outcome. It is always possible that more powerful entities exist somewhere that will perceive you as a threat if you are not acting peacefully toward all those around you. That is true on world scales, galactic scales, cosmic scales, and perhaps even multiverse scales. When one has a reasonable probability of living a very long time, such considerations become very relevant.

Agree with Gillian Tett, in a sense, that world peace is not a stable concept within the economic framework we have in place at present, though I suspect my rationale goes far deeper: we need to transcend market-based economics. It is time for post-scarcity thinking.

Agree with Ethan Zuckerman that peace without empowered freedom is tyranny. Our primary allegiance needs to be to sapience, before any allegiance to nation or family or any other sort of grouping.

Agree with Carne Ross, in a sense, that it is not possible to achieve peace within existing economic and governance frameworks.

Disagree with Jay Winter and Kant in a very deep sense, and see no need for brushfire wars. Certainly real freedom demands an ability to make mistakes, and peace will only ensue to the degree that the deepest thinkers and actors are prepared to act in their own and the common community's long-term best interests. I am not talking about any sort of perpetual sameness; I am talking about a level of awareness that can clearly see that systemic violence does not serve the long-term self-interest of any group or subgroup - not really.

Agree with Peter Morales that peace involves developing collaborative structures (but not interdependence; it actually needs independence).

Agree with Kishore Mahbubani that war becomes less likely to the degree that we can all see that it is not in our self-interest.

Agree with Nancy Birdsall that we have the potential to achieve the goal, and that it is in the self-interest of all of us to do so.

Ideapod Discussion

My Blog on this topic



