With 47% of businesses surveyed integrating artificial intelligence (AI) into their operations and 78% planning to increase investments in the technology in the near future, according to McKinsey & Company, the race is on to create not only capable AI but ethical AI. Yet no one has distinguished what maturity means for the ethical development and use of AI, or the steps to get there. "Maturity" isn't a word that belongs only to wine connoisseurs characterizing their best vintages or Wall Street brokers characterizing the ultimate payout of their financial instruments. It describes every expectant mother coming to term and every football team arriving at the Super Bowl. Maturity is the destination, and the steps to achieve it can readily be described as a "model" once maturity itself is distinguished. The term "maturity model" is jargon that originated in the late 1980s, when the U.S. Department of Defense funded research to evaluate the ability of contractors to develop and deliver their goods. Soon thereafter, a Cambrian-like explosion of maturity models elaborated everything from software development productivity, logistics, and smart-grid modernization to strategy implementation through projects. The time has come for an ethical AI maturity model.
Our aim is to help users assess and develop capabilities for creating and implementing AI ethically. If a college course curriculum is a "maturity model," then the grading rubric for that course is the corresponding assessment protocol. Just as maturity models have proliferated, so have myriad cottage industries for assessing the maturity of this or that, characterizing where anything stands on its path to its end state. What it means for AI to be "ethical" can be distinguished and evaluated for any application, whether driverless cars, face recognition, or AI-enabled medical diagnosis. It can be assessed from the varied perspectives of sponsors, designers, developers, suppliers, government authorities, users, beneficiaries, and other stakeholders, helping align these many roles.
To distinguish what it means to exhibit the requisite capabilities in ethical AI from all perspectives, and to chart the steps from lesser capability to greater, one must reverse-engineer choice architectures for vexing questions. Should a driverless vehicle faced with a no-win crash scenario sacrifice its own passengers or another's? Is face-recognition software that invades privacy unacceptable for commercial use yet plausible in public-safety situations? How should AI-enabled medical diagnosis balance the interests of patients and insurers? In every case: where does the data that feeds the AI come from, how should the capabilities of AI-based decision-making be directed, and how should competing interests be arbitrated? How should participants in every role of the AI ecosystem weigh the needs of the one against the needs of the many? Are the answers to these questions universal, or do they vary across cultures?
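To make the idea of scoring capabilities across perspectives concrete, here is a minimal sketch of what a staged maturity rubric could look like in code. Everything in it is a hypothetical illustration, not part of any published E-AIMM draft: the level names follow classic CMM-style staging, and the capability areas are drawn loosely from the questions above (data provenance, direction of AI decision-making, arbitration of competing interests).

```python
from dataclasses import dataclass

# Hypothetical staged levels, loosely modeled on classic CMM-style
# maturity models (1 = ad hoc, 5 = optimizing). Illustrative only.
LEVELS = {1: "Ad hoc", 2: "Repeatable", 3: "Defined",
          4: "Managed", 5: "Optimizing"}

# Illustrative capability areas an ethical-AI rubric might score.
CAPABILITIES = ["data_provenance", "decision_oversight",
                "interest_arbitration"]

@dataclass
class Assessment:
    """Scores one stakeholder perspective (sponsor, developer, user, ...)."""
    perspective: str
    scores: dict  # capability name -> level (1..5)

    def overall(self) -> int:
        # In a staged model, an organization is only as mature
        # as its weakest capability area.
        return min(self.scores.values())

def alignment_gap(assessments) -> int:
    """Spread between the most and least mature perspectives --
    a rough proxy for how well the roles in the AI ecosystem align."""
    overalls = [a.overall() for a in assessments]
    return max(overalls) - min(overalls)
```

An assessment protocol built on such a structure could then compare, say, a sponsor's self-assessment against a developer's, with a large `alignment_gap` flagging perspectives that have not yet been reconciled.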
By some accounts, ensuring AI is ethical is an existential issue for humanity. Elon Musk famously said that everyone racing to develop AI is “summoning the demon,” running the risk of creating something beyond human control: a moral hazard with mortal consequences. We essentially wish to “unsummon” the demon, cultivating instead the intent and capability to base AI on the better angels of our nature. To that end, our vision is to create a widely and enthusiastically endorsed maturity model recognized worldwide as the standard for developing and assessing ethical AI. Our mission is to develop an open-source Ethical Artificial Intelligence Maturity Model (E-AIMM) that provides methods for assessing and developing capabilities to ensure the ethical development and use of AI, promoting successful, consistent, and predictable ethical behavior by all stakeholders and AIs. For this purpose, we are creating a global community of participants who will contribute to the development of the model, an assessment protocol, certifications, benchmark data, and conferences.
Join us for the kick-off of the E-AIMM Program on August 24, 2019 in Atlanta and become a member of the E-AIMM Program team. Come hear keynote speaker Chris Benson, Lockheed Martin’s Chief Strategist for AI Ethics, and diverse panels of leaders in the field. That very day, we will prototype the model based on their input and yours, then announce the next steps to complete it. Reserve the date now!