There has been, and should be, a lot of talk about Ethical AI. Over the last several weeks, I have been revising Sturdy’s Ethical AI policy. I am trying to convey that we don’t do shady stuff and won’t let our customers do it, either.
(If you are interested in Ethical AI, we have a webinar coming up at the end of the month; the registration link is in the comments.)
Writing the policy, I realized we need to talk about ethics writ large, not just as it relates to AI.
Consider the case of Allstate and Arity, as reported in a June 9 NYT story, “Is Your Driving Being Secretly Scored?” Allstate apparently owns Arity. Arity builds phone apps for things like finding gas stations. Those apps also track how you drive, though they bury that minor detail in “consent” pages that no one reads. They then share that data with Allstate.
Not a lot of gray area here. This is unethical.
My co-founder, Joel Passen, coined this mantra at our first startup 20-ish years ago:
“Build what you’d want to use, sell it how you’d want to be sold, and service it how you’d want to be serviced.”
I don’t think anyone downloading a gas station finder app wants their driving data sent to Allstate. I would not. And I would not build it.
So, instead of an “Ethical AI” policy, I’ve decided we need an “Ethical Software Policy.” It will encompass our use of AI, our platform, and how we expect our software to be used.
Here’s a bit of a summary so far…
The problem is that many “Ethical Policies” are only as good as the paper they are written on. They are a checkbox on an RFP. None of us wants to live in that world. Maybe it’s time to try to live in a better one.
It is hard to say “no” to revenue. Do the hard things.
Let me know your thoughts.
Steve