In a world swept up in heated debates about the risks artificial intelligence might pose to our jobs, our society, and our sanity, one might assume that AI is a relatively new concept and that its rapid emergence occurred out of nowhere.
But as Chris Wiggins, professor of applied mathematics and systems biology at Columbia University, points out, that couldn’t be further from the truth. “AI isn’t new,” he said, speaking about his new book, How Data Happened, co-authored with Matthew Jones, the James R. Barker Professor of Contemporary Civilization. “It’s a moving target. It means different things to different communities at different times.”
In mid-July, Wiggins spoke at a Columbia Business School event, hosted at Dear Mama Coffee in Manhattanville and facilitated by Professor Bruce Kogut, as part of the School’s Business, AI, and Democracy series.
Wiggins explained that the technologies that now shape our daily realities — from facial recognition that checks people into flights and identifies undocumented residents to automated decision systems that determine who gets a loan or receives bail — didn’t just appear overnight. Yes, the relevance and impact of these technologies might feel novel and unfamiliar, but they have rich and detailed histories that go back centuries and span the globe, he said. And that’s exactly what How Data Happened: A History from the Age of Reason to the Age of Algorithms documents.
Coining a Term
Indeed, as its title suggests, Wiggins and Jones’s book encompasses a vast period.
The authors highlight a workshop held 67 years ago, at a moment poised between those two ages: received reason was coming under scrutiny, but the age of the algorithm had yet to decisively dawn.
The 1956 Dartmouth Summer Research Project on Artificial Intelligence, which ran over the course of several weeks, was organized by an array of brilliant mathematicians, computer scientists, and other academics. Described as an extended brainstorming session, it is where the term artificial intelligence was coined. Today, the workshop is widely regarded by scholars and practitioners as the founding event of the field.
Wiggins noted that one of the academics in attendance was Herb Simon, a pioneer in decision-making who was noteworthy for the interdisciplinary reach of his work: He was a political scientist by training, but his research and thinking had a profound impact on economics, computer science, and cognitive psychology. Simon’s influence on the field of data and AI was particularly remarkable in that he argued against just looking at the data itself, Wiggins said.
“Simon argued that we should be looking at schema” instead of data, Wiggins elaborated, “to understand the way we solve problems and then program machines to do so. He helped keep the data-driven paradigm down.”
In their book, Wiggins and Jones acknowledge that they are concerned about personal privacy and, yes, the impact on democracy, too. But they also contend that it’s within our power to do something about these threats.
“We don’t have to use algorithmic decision systems, even in contexts where their use may be technically feasible,” they write. “Ads based on mass surveillance are not necessary elements of our society. We don’t need to build systems that learn the stratifications of the past and present and reinforce them in the future.”
And this is something Wiggins underscored in his talk at the CBS event. The last chapter of the book, he explained, is designed to instill hope and zoom out to understand all these developments in the context of the powers and dynamics at play: state power, corporate power, and people power, or private ordering.
“There's abundant literature on what I call ‘dumpster fire-ology,’ or listing things that are wrong about the internet,” Wiggins said, noting that Chapter 13 of How Data Happened aims to contribute not a list of problems but instead an analysis of possible solutions.
The authors offer ways to avert the dangers posed by a world in which data and AI can create massively skewed power dynamics that contribute to systemic inequalities. They also suggest ways of addressing the risk of AI amplifying the bias found in human discourse. Understanding the past is integral to staking out a successful path into the future. And Jones and Wiggins certainly help us do just that.