One year after the White House kicked off the American AI Initiative, its effects on research and development in the burgeoning field of artificial intelligence are just beginning to sink in.
And Michael Kratsios, the White House’s chief technology officer, says those effects are sure to be felt in Seattle — where industry leaders including Amazon and Microsoft, and leading research institutions including the University of Washington and the Allen Institute for Artificial Intelligence, are expanding the AI frontier.
This month, funding for AI emerged as one of the bright spots in a budget proposal that would reduce R&D spending on other fronts. Kratsios said the White House conducted a “cross-cut” analysis of non-defense spending on AI research, and found that it amounted to nearly $1 billion.
“We made the big step of announcing, a couple of weeks ago, a doubling of AI R&D over two years,” he told me this week in an interview marking the anniversary of the American AI Initiative.
The two-year plan foresees spending $2 billion annually on AI research by 2022. “We think this is a game-changer moment for AI R&D in this country,” Kratsios said.
Seattle-area researchers are virtually certain to be in on the game. “People at the University of Washington and other institutions around the country will be able to apply and tap into those new funds,” Kratsios told me.
Efforts to bring clarity to the regulatory environment for AI applications should also help the Seattle area’s tech community, he said.
“There is nothing more discouraging for an innovator than having a great product, but having no concept about how, when, where and under what constraints the federal government is going to be regulating it,” Kratsios said. “With the regulatory principles … our innovators now have a clear vision of the way that the federal government is thinking about these products, and they can more confidently be innovating in this space than they were before.”
Those draft guidelines call for a “light-touch,” agency-by-agency approach to AI regulations. They also promote the development of trustworthy AI, and encourage public engagement in discussing the social issues surrounding AI. Speaking of public engagement, they’re open for public comment through March 13.
To mark the anniversary of the American AI Initiative’s kickoff, the White House Office of Science and Technology Policy today issued a “Year One Annual Report” on its progress. The report touches upon the funding outlook and the regulatory approach, as well as efforts to train an AI-ready workforce and work with U.S. allies to boost competitiveness on the AI frontier.
Kratsios addressed those same themes during this week’s interview. Here’s a sampling from the Q&A, edited for brevity and clarity:
GeekWire: Some people say that something new has to be created to deal with the challenges of the AI environment, while others say it’s more fitting to distribute regulatory responsibility across sector-specific agencies. Do you think new institutions will need to be created to deal with the issues surrounding ethics and consumer protection?
Michael Kratsios: “No, our general approach is that artificial intelligence is a tool, just like many other tools that have come to pass over time in the United States. Each of our regulatory agencies has been equipped to deal with changes in the technology that underlies the tools, so our approach is one that is sector-specific and risk-based. We believe the expertise at the individual agencies will allow them to be best-equipped to make those very important decisions.”
Q: The American AI Initiative has put a priority on supporting AI as an “Industry of the Future” and promoting the training of an AI-ready workforce. One of the issues that I think we see, even in Seattle, relates to which kind of workforce is really engaged in the AI revolution. Those jobs tend to be highly technical, high-paid jobs requiring a lot of academic training — and there’s a concern that it may leave behind a large sector of the less technical workforce. How do you see the issue of workforce training playing out for people in less skilled professions right now?
A: “We believe that artificial intelligence is the tool that all types of workers will be able to harness to do their jobs better, safer, faster and more efficiently.
“One of the highlights of 2017 was an event here at the White House, in the East Room with the president, on American leadership in emerging tech. We had a discussion about drone operations, and we had a quarry surveyor from Alabama come to the White House and meet the president. This was someone who was starting to have tough knee problems as he was traversing up and down these large piles of rocks doing his assessments. He was very close to needing to retire early, just from the physical limitations.
“His situation changed dramatically when he was able to use drone technologies that had some AI-powered characteristics to them. Now, rather than having to do this intense manual labor, he was able to pilot a drone to do even more accurate assessments of the quarry.
“This is an example of how AI technology can be affecting all types of work. Whether you’re a farmer in Iowa, or you’re doing resource extraction in Texas, or you’re doing pharmaceutical research in Boston, you’ll be using this technology. And you’ll need folks with skills at all levels to make the most of it.”
Q: There’s been a lot of talk about facial recognition technology and the ethical questions it raises. Microsoft, for example, has what it calls the Aether Committee, which has already ruled out some proposed applications for facial recognition. What sort of approach do you think will be most workable for the AI initiative you’re putting together?
A: “Generally, with regard to any technology, we do not believe that all-out bans are the way to approach technological innovation. We believe that the federal government can play a very important role in working on technical standards, doing better research and equipping the policymakers who want to use particular technologies with the tools they need to make sure that they’re being used in compliance with existing laws.
“There are certain use cases where facial recognition can be very helpful — for example, helping to find missing children. But there are other cases where we need to be more thoughtful about the privacy implications. We believe, again, in a sector-specific analysis of use cases that should be technically rigorous.”
Q: Is the model of having an industry-specific or company-specific AI ethics committee workable?
A: “Within companies, we’ve observed a good level of success. I’ve spoken with the people at Microsoft who have that group that’s helping with a lot of the decisions, and it’s a very robust discussion which includes a number of stakeholders. I think that can lead to better work.
“There’s a larger point that we brought up in our AI regulatory principles, about the need for public engagement — the idea that these decisions cannot and should not be made in silos, but rather should be made with the community. That’s something we deeply believe is the path forward for AI.”