Earlier this summer, the National Artificial Intelligence Research Resource (NAIRR) Task Force posted a request for information (RFI) on how to build an implementation roadmap for a shared AI research infrastructure. In addition to soliciting ideas on how to own and operate this resource, the Task Force asked for advice on how best to protect privacy, civil liberties, and civil rights as the effort moves forward. To achieve this goal, resources for education and training in values-based ethical reasoning must be at the heart of the Task Force's strategy.
What’s at stake
Congress's passage of the National Defense Authorization Act for Fiscal Year 2021, which directed the Biden White House to create the NAIRR Task Force, could have ramifications for America's democratic ideals as far-reaching as many of the wars, policies, and civil rights movements of our past.
Although the Task Force's first public announcements do not explicitly mention foreign governments, make no mistake: geopolitical competition with China, Russia, and other nation-states weighs urgently on its mission.
Not since the Manhattan Project and the race to develop the atomic bomb has a technology held such potential to reshape the balance of power between Western democracy and what Stanford's Institute for Human-Centered Artificial Intelligence calls "digital authoritarianism." As with the nuclear arms race, the path the United States takes in developing and deploying this technology will determine the degree of freedom and quality of life for billions of people on Earth. The stakes are that high.
The precedents are clear
While the stated delivery date for the NAIRR Task Force's report and roadmap is not until November 2022, it is important to keep in mind that building an AI ethics that reflects American values is a long process, and one essential to American identity. Yet the precedent for an ethical and inclusive roadmap is written in our history, and we can look to the military, medical, and legal professions for examples of how to do this successfully.
The military. On July 26, 1948, President Harry Truman issued Executive Order 9981 to initiate the desegregation of the military. This led to the creation of the President's Committee on Equality of Treatment and Opportunity in the Armed Services and to one of the most important ethics and values reports in US history. Yet it should be noted that it was not until January 22, 2021 that retired four-star General Lloyd Austin III was sworn in as the first Black Secretary of Defense. Integrating American ethics and values into AI-related disciplines will require the same sustained and tireless effort.
The medical field. The American Medical Association (AMA) Code of Medical Ethics is considered the gold standard for ethics and values in a professional discipline, with roots stretching back to the fifth century BCE and the Greek physician Hippocrates' ideal of "relieving suffering and promoting well-being in a relationship of fidelity with the patient." Despite this deep and rich history of ethics at the heart of medicine, it wasn't until 1977 that Johns Hopkins became the first medical school in the country to make a course in medical ethics a compulsory part of its core curriculum.
The law. Bar associations began introducing codes of ethics for lawyers and judges in the United States in the early 1800s, but it was not until the early 1900s, with the widespread adoption of the Harvard case method in law schools, that legal ethics were tied to professional responsibility and a clear set of moral duties to society was built into legal education and the profession.
The road ahead
The disciplines related to AI (computer science, engineering, and design) lag far behind other professions in ethics requirements, education, and training. There are, however, dozens of promising tech-ethics organizations and initiatives working to promote and unify education and training in ethical reasoning for AI.
Higher education. Integrating ethics and values training into the core curriculum of every college-educated engineer, designer, and computer scientist should be a central goal of any national AI strategy.
To this end, the Markkula Center for Applied Ethics at Santa Clara University is one of the most prolific producers of technology ethics curricula, case studies, and decision-making training for students and practitioners. Likewise, MIT has begun developing an ethics curriculum of its own and should also be consulted during implementation planning. What's more, AI ethics institutes are being created around the world and represent fertile ground for adding resources to the NAIRR Task Force's effort.
While most of these efforts focus on higher education and current professionals, the Task Force also has the opportunity to begin sharing values and ethics resources with the major STEM-focused secondary education programs emerging across the country. The National Science and Technology Council's Committee on STEM Education has underlined the need for more ethics education at all levels of STEM education, and the NAIRR Task Force is well positioned to distribute and unify these resources.
Public-private partnerships and consortia. Leading public-private and professional organizations are developing first-rate offerings that train AI practitioners in methods for creating ethically sound AI. Consulting with these outside groups will be essential as the NAIRR Task Force moves forward with its national AI strategy.
For example, the World Economic Forum's (WEF) Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning platform has had a significant impact on governments and businesses around the world through its advisory work, publicly available research, white papers, ethics toolkits, and case studies. These resources can help accelerate the benefits and mitigate the risks of artificial intelligence and machine learning.
Likewise, the Responsible AI Institute (RAI) created the first independent, accredited certification program for responsible AI. In fact, RAI has already been engaged by the Joint Artificial Intelligence Center (JAIC) of the US Department of Defense to incorporate ethical, values-based responsible AI guardrails into its procurement practices.
It will take years to integrate ethics and values into the professional disciplines related to AI, but it is possible. As the NAIRR Task Force builds its roadmap, the team should draw on our history, provide resources for ethics training in higher education, adapt that training for high school STEM programs, and work with professional organizations to bring best-in-class materials to those already working in industry. If we are to win the race for AI innovation while upholding our democratic principles, we have to start here, and we have to start now.
Will Griffin is the chief ethics officer of Hypergiant, an AI company based in Austin, Texas. He received the IEEE 2020 Award for Distinguished Ethical Practices and created the Top of Mind Ethics (TOME) framework for Hypergiant, which won the Communitas Award for Excellence in AI Ethics.