By Tricia Martinez, Managing Director of the Techstars Industries of the Future Accelerator
This past week I had the privilege of posing some questions to someone I know from my time at the Department of Energy: ORNL's AI Program Director, David Womble. David oversees the laboratory's AI and machine learning strategy for high-performance computing (HPC), ensures ORNL's AI research advances the Department of Energy's Office of Science mission, and conducts long-range program planning and project leadership.
Some background on David: in his more than three decades in computing, he has won two R&D100 awards and the Association for Computing Machinery's Gordon Bell Prize, awarded each year "to recognize outstanding achievement in high-performance computing."
Prior to joining ORNL, David spent 30 years at Sandia National Laboratories, where he served as a senior manager and program deputy for the Advanced Simulation and Computing program, which is responsible for developing and deploying modeling and simulation capabilities, including hardware and software, in support of Sandia's nuclear weapons program. During his tenure at Sandia, he made numerous contributions across the computing spectrum, including HPC, numerical mathematics, linear solvers, scalable algorithms, and I/O, while establishing the Computer Science Research Institute and leading Sandia's seismic imaging project in DOE's Advanced Computational Technologies Initiative.
To say he is impressive is an understatement! What I love about working with David is his curiosity, not only about artificial intelligence as a technology, but also about the broader ethical implications it will have for society.
Artificial intelligence (AI) can be defined most simply as the process of automated (computer-based) decision-making based on data. Machine learning is the process of building computer models that can be trained using data. These models can take many forms, including, for example, neural networks, decision trees, and clustering. Deep learning refers specifically to neural networks with more than three layers.
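To make the definition above concrete, here is a minimal sketch of "a model trained using data": a perceptron, the simplest neural-network unit. This toy example (learning the logical AND function) is my illustration, not something from ORNL's work; the learning rate and epoch count are arbitrary choices that happen to converge for this tiny dataset.

```python
# A perceptron adjusts its weights from labeled examples instead of
# being programmed by hand -- the essence of "machine learning."

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(w . x + b) matches the labels."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # misclassified: nudge the decision boundary toward x
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Logical AND: the label is +1 only when both inputs are 1.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])  # -> [-1, -1, -1, 1]
```

A "deep" network, in the sense David describes, stacks many such units in more than three layers, which lets it learn far more complex decision boundaries than this single unit can.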
The question becomes more interesting when we consider that the decisions computers are making are those that are generally associated with human intelligence, such as driving vehicles, translating speech, and executing business transactions. This ability to store knowledge, even in a narrow sense, and query that knowledge is forcing us to rethink human intelligence and creativity.
In one sense, AI is evolutionary, not revolutionary. Our ability to collect and process data continuously increases, and businesses are taking advantage of these capabilities to design and optimize everything from social and financial products to manufacturing supply chains.
In another sense, AI is revolutionary. The decisions that AIs are making are those associated with human knowledge and intelligence. Over a period of centuries, we automated many manual labor-based jobs through the industrial revolutions; we are now automating many knowledge-based jobs.
But I also suspect the question is moot. The AI evolution/revolution is here and having a significant impact on our daily lives; we need to deal (hopefully proactively) with the many social, financial, political, and ethical issues that are raised.
There are many challenges, and it is hard to know where to begin. But I’ll make a few comments in three categories:
Technical. This receives most of the attention from researchers and is where most effort is devoted. If I had to identify one meta-challenge, it would be "knowledge representation," to enable the transfer of knowledge and the making of correlations between disparate disciplines. A second meta-challenge would be robustness, resilience, and dealing with unknowns and uncertainty in learning and decision-making.
Assurance and Ethics. This can be summarized to mean that an AI gets the right answer for the right reason and that this answer reflects the values of society. This is very difficult for several reasons, including 1) the abstract form of the model makes it difficult to explain or interpret an AI’s decision, 2) an AI can only learn about the world the way it is and for which data is available, which may not be the world that reflects our values, and 3) this can only be addressed by looking at the full “AI stack” including data, models, learning, and use/user. I am afraid that this cannot be entrusted to a single entity or industry and will require a regulatory framework that protects people and ensures the safety and integrity of AI-based systems, while still encouraging innovation.
Political, legal, and social. There are several challenges in this category, such as achieving a basic level of AI literacy, and a basic understanding of the limits of AI and when and when not to trust an AI. Another challenge is dealing with data; AI and ML depend on data, and the ability to process this data drives its value. We need a data framework that encourages innovation but does not defer to industry’s focus on profits. And we will need a sustained national investment in AI.
AI, ML, and other forms of advanced data analytics are already an intrinsic part of science, business, and society, so it is hard to identify a "greatest opportunity." But in the near term, perhaps the most positive benefits will be in medicine and in our ability to optimize infrastructure. (Example infrastructures include transportation, energy, supply chains, and manufacturing.) And perhaps the most disruptive uses of AI will be in surveillance, including both physical and social surveillance, and in military systems.
Oak Ridge National Laboratory is making significant contributions in many of these areas. Work includes research to design new drugs, enzymes, and materials; improve energy generation and distribution; develop additive manufacturing; and optimize the transportation infrastructure. This work is enabled by the lab's world-class science and computing facilities, including one of the world's fastest supercomputers (Summit).
I am most intrigued by the ability of AI research to turn a mirror on ourselves. AI research to model the brain has the potential to help us understand, or even define, human intelligence. Also, if we accept that an AI is, in fact, capturing the world the way it is now, then we also must accept that the biases that show up in an AI exist in the world. Can AI become a tool to correct those biases?
Are you a founder building deeptech? Are you a scientist interested in the Techstars Industries of the Future Accelerator? Sign up for Office Hours with me, subscribe to my blog, or reach out to me for support!
Tricia Martínez is the Managing Director of the Techstars Industries of the Future Accelerator. Tricia is an experienced serial entrepreneur, executive, and activist passionate about driving large-scale impact through technology and innovation. Tricia has earned recognition as a top 20 founder of color by Conscious Company Magazine, Hispanic Entrepreneur of the Year by the USHCC, and a top 100 FinTech Leader, among other honors. Tricia is also an alumna of the London Barclays Accelerator, powered by Techstars, participating in the 2016 program with her blockchain-enabled financial services platform, Wala.