What is the Responsible AI Lab?
This experimental newsletter is a laboratory for exploring Responsible AI, the most significant issue of our time.
My perspective on helping organizations have a bigger impact is informed by the lenses of economics, social science and computer science.
What is the social impact of AI?
Towards Responsible AI
The problem of algorithmic bias is that many algorithms replicate the biases humans tend to have: biases against women, and discrimination against people of different ethnicities, ages and other characteristics that are legally protected around the world.
Algorithms are as irresponsible as society tends to be.
The stakes are high: algorithms are influencing and even driving decision making in a variety of spheres, from creditworthiness and other judgment calls in the economy to judicial decisions such as whether to let someone out on parole, and many others.
We act on such social and psychological biases so subconsciously offline that they show up in the algorithms and products we engineer, deploy and use online.
What is being done about algorithmic bias?
The major tech companies, including Facebook, Amazon, Microsoft and Google, have dedicated units focused on Responsible AI: algorithms that are fair, responsible, ethical and transparent. For now, these are internal departments that attempt to inform decision making in the core AI teams, but they have varying degrees of influence and empowerment to significantly shift outcomes.
This is an emerging space in which far more remains to be understood than the sliver that experts have mastered thus far. Interesting incentive dynamics complicate affairs in a hyper-competitive industry. Companies must understand their impact in a world where consumers interact with contexts off the internet that inform how they behave online; the lines between the online and offline spheres are increasingly blurred.
The social impact of AI is the hottest part of AI, machine learning and technological innovation more broadly, because for better or worse, the future of AI in particular and that of the economy in general increasingly depends on it. The intellectual space is transcending programming and code and extending to what we know about humans and our interactions with one another. Unfortunately, the expertise is scarce, in part because it is interdisciplinary. The social impact of AI cuts across AI, the economy, politics and society, which are often thought of as being in separate silos. Within all of the tech companies and beyond them, a major challenge is getting experts across different disciplines to understand one another.
Responsible AI is, however, even more important for tech startups, newer companies and other organizations such as non-profits and governments that are increasingly using AI. Such organizations really cannot afford a giant ethical misstep that could sink them in a legal scandal or an epic economic tailspin. Each has its own unique context, so simply copying and pasting what worked for a particular tech company will often not suffice in a very different environment.
Firms also face challenges that are under-explored and best addressed from an independent platform such as mine. How is a platform, app or innovation affecting society? This is what many organizations urgently need to know to avoid being overtaken by events when it is too late. From the antitrust hearings significantly affecting Google, Facebook, Amazon and Apple, to company boycotts, to various mental health issues, all of these can be traced in one way or another to significant gaps in stakeholders’ understanding of the social impact of AI. Although human discrimination predates AI, this overarching issue of social impact is inextricable from employees, leaders and stakeholders not being representative of the larger society in terms of race, gender, age and other demographics.
Most firms’ stakeholders would rather not have their systems reflect gender, racial or other biases at a time when going viral for the wrong reasons can be costly and even fatal. Most tech employees are not malicious and do not actually realize they are building biases into their metrics until after the fact, if they ever do. Even when suspicions arise, many have significant difficulty identifying and unpacking such discrimination in a rigorous way.
Even in cases where professionals are adequately informed, however, employee researchers are as human as everyone else. They are not exempt from perverse incentives, such as the motivation to tell corporate leaders mostly what they want to hear rather than what they need to know. Such outcomes have motivated a movement towards research transparency, but much work remains. Nor are these issues limited to the tech sector as societies become increasingly algorithmic in scope: government policy makers face similar constraints when they think about voting outcomes in a democracy, for example.
An independent, rigorous perspective is key, but rare. Economic and social inequality is a growing worry for many, and the concern about the future of work is that there will be no work in the future. Other relevant issues include privacy, security, surveillance and discrimination. Hearing the voice of every individual will be key to staying ahead of these concerns and helping to ensure that AI has a positive impact.
This newsletter will help you better understand the social impact of AI on your world, so that you are better prepared to meet your needs and those of your users and other stakeholders.
I look forward to sharing my expertise in this exciting field with you.
Send any requests about AI social impact and Responsible AI to kwekuknows@gmail.com
This newsletter does not represent the views of any person or organization mentioned here.
My research is entirely independent; separate consulting services are provided to organizations on a case-by-case basis. Email kweku2008@gmail.com for inquiries.