The writer is founding co-director of the Stanford Institute for Human-Centered AI (HAI) and CEO and co-founder of World Labs
Artificial intelligence is advancing at a breakneck pace. What used to take computational models days can now be done in minutes, and while training costs have gone up dramatically, they will soon come down as developers learn to do more with less. I’ve said it before, and I’ll repeat it: the future of AI is now.
To anyone in the field, this is not surprising. Computer scientists have been hard at work; companies have been innovating for years. What is surprising — and eyebrow-raising — is the seeming lack of an overarching framework for the governance of AI. Yes, AI is progressing rapidly — and with that comes the necessity of ensuring that it benefits all of humanity.
As a technologist and educator, I feel strongly that each of us in the global AI ecosystem is responsible for both advancing the technology and ensuring a human-centred approach. It’s a difficult task, one that merits a structured set of guidelines. In preparation for next week’s AI Action Summit in Paris, I’ve laid out three fundamental principles for the future of AI policymaking.
First, use science, not science fiction. The foundation of scientific work is the principled reliance on empirical data and rigorous research. The same approach should be applied to AI governance. While futuristic scenarios capture our imagination — whether utopia or apocalypse — effective policymaking demands a clear-eyed view of current reality.
We’ve made significant progress in areas such as image recognition and natural language processing. Chatbots and copilot-style software assistants are transforming work in exciting ways, but they do so by applying advanced data learning and pattern generation. They are not forms of intelligence with intentions, free will or consciousness. Understanding this is critical: it saves us from the distraction of far-fetched scenarios and allows us to focus on vital challenges.
Given AI’s complexity, even focusing on our reality isn’t always easy. To bridge the gap between scientific advancements and real-world applications, we need tools that share accurate, up-to-date information about the technology’s capabilities. Established institutions, such as the US National Institute of Standards and Technology, could illuminate AI’s real-world effects, leading to precise, actionable policies grounded in technical reality.
Second, be pragmatic, rather than ideological. Despite its rapid progression, the field of AI is still in its infancy, with its greatest contributions ahead. That being the case, policies about what can and cannot be built must be crafted pragmatically, to minimise unintended consequences while incentivising innovation.
Take, for example, the use of AI to more accurately diagnose disease. This has the potential to rapidly democratise access to high-quality medical care. Yet, if not properly guided, it might also exacerbate biases present in today’s healthcare systems.
Developing AI is no easy task. It is possible to develop a model with the best intentions, and for that model to be misused later on. The best governance policies, therefore, will be designed to tactically mitigate such risk while rewarding responsible implementation. Policymakers must craft practical liability policies that discourage intentional misuse without unfairly penalising good-faith efforts.
Finally, empower the AI ecosystem. The technology can inspire students, help us care for our ageing population and innovate solutions for cleaner energy — and the best innovations come about through collaboration. It’s therefore all the more important that policymakers empower the entire AI ecosystem — including open-source communities and academia.
Open access to AI models and computational tools is crucial for progress. Limiting it will create barriers and slow innovation, particularly for academic institutions and researchers who have fewer resources than their private-sector counterparts. The consequences of such limitations, of course, extend far beyond academia. If today’s computer science students cannot carry out research with the best models, they won’t understand these intricate systems when they enter the private sector or decide to found their own companies — a serious gap.
The AI revolution is here, and I am excited. We have the potential to dramatically improve our human condition in an AI-powered world, but to make that a reality we need governance that is empirical, collaborative and deeply rooted in human-centred values.