Source: Free Range Stock

Recourse for AI

I’ve had quite a few conversations recently with practitioners looking for ways to design feedback loops into the governance systems for the AI technologies they deploy. There are a number of ways to do this, but the advice I’ve found to be universal is simple to explain yet politically sensitive to design and deploy.

First, let’s talk about what is meant by “recourse.” Recourse is the way a user of an AI technology can inform the creators of that technology that something unexpected, undesired, unsafe, or anomalous happened. The easiest way to think about it is a big red button on a humanoid robot. When the robot says or does something inappropriate, a human can push the big red button and immediately talk to the robot’s creator.

It’s important to note why the “big red button” analogy works well: it gives the user a clear signal for how to access recourse, the recourse is immediate, and it offers a direct line of communication to a responsible party. These qualities of access are of paramount importance.

Considering this notion of ease of access, there are three target populations a user should be able to connect with. There are others, but these three matter most for recourse to be not only effective but also a trust-building feature: (1) responsible engineers; (2) an internal governance body; (3) external regulators.

Each of these requires a progressively greater amount of unpacking. First, the engineers are the ones who built the system: they most intimately understand how it works and what the design trade-offs were, and they have the final say on what constitutes anomalous behavior. They’re also the ones who can rectify a problem in the shortest amount of time. The second responsible party requires that a robust data or AI ethics governance body actually exist (which is part of the point). The internal governance body will have representation from the executive, legal, and engineering teams, as well as multidisciplinary stakeholders from throughout the company, all of whom might have something to glean from a given instance of anomaly; what’s more, this body should also see regular reports of anomalous behavior, so it will understand the broader context. And the third item leads to some of the more vigorous conversations…
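The escalation structure above can be sketched in code. This is a minimal illustration, not an implementation from the post: the class and field names (`RecourseReport`, `route_report`, the severity labels) are my own assumptions, chosen to show how a report might fan out to the three recipient populations.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Recipient(Enum):
    """The three target populations a recourse report should reach."""
    RESPONSIBLE_ENGINEERS = "responsible_engineers"
    INTERNAL_GOVERNANCE = "internal_governance"
    EXTERNAL_REGULATOR = "external_regulator"


@dataclass
class RecourseReport:
    """A user-submitted report of unexpected, undesired, unsafe,
    or anomalous system behavior (hypothetical schema)."""
    system_id: str
    description: str
    severity: str  # e.g. "undesired" or "unsafe" (assumed labels)
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


def route_report(report: RecourseReport) -> list[Recipient]:
    """Fan a report out: engineers always see it immediately, the
    governance body always receives it for its regular anomaly
    review, and a regulator is notified for unsafe incidents."""
    recipients = [Recipient.RESPONSIBLE_ENGINEERS,
                  Recipient.INTERNAL_GOVERNANCE]
    if report.severity == "unsafe":
        recipients.append(Recipient.EXTERNAL_REGULATOR)
    return recipients
```

The design choice worth noting is that the first two recipients are unconditional: engineers get the fastest path to a fix, and the governance body accumulates the context that makes individual anomalies legible as patterns.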

“Well, we’re not regulated. So, who would this be?” is a fair concern. Again, this is part of the point. If, as an industry, we’re deploying technologies that need a “recourse button,” we, as an industry, should be regulated. Therefore, it’s incumbent upon those building these technologies to lobby their government officials on sensible and strategic regulatory priorities that safeguard the opportunities for innovation while protecting the entire industry from the misdeeds of a single company or bad actor. We’re all in this boat together, so we should have a set of customs (or rules) that prevent others from putting holes in the hull of our shared vessel.

The companies that get this and lead sooner rather than later will be the companies that get to set the standards for their current and future competitors.

Steven Tiell

Steven is the founder of this blog. He's a technology and business strategist who started exploring data ethics in 2013 and has been publishing thought leadership in the space since 2014. In addition to his day job (where he's spent 100%+ of his time on data ethics and responsible innovation since the Cambridge Analytica scandal in 2018), Steven also serves on the board of a grass-roots community organization, and serves as advisor to larger organizations through personal and professional commitments.

