If consumers don’t believe that the AI tools they’re interacting with are respecting their privacy, are not embedding bias and discrimination, are not causing safety problems, then all of the marvelous possibilities really aren’t going to materialize. Nowhere is that more true than in national security and law enforcement.
I’ll give you a great example. Facial recognition technology is an area where there have been horrific, inappropriate uses: Take a grainy video from a convenience store and identify a Black man who has never even been in that state, who’s then arrested for a crime he didn’t commit. Wrongful arrests based on a very poor use of facial recognition technology—that has got to stop.
In stark contrast to that, when I’m going through security at the airport now, it takes your picture and compares it to your ID to make sure that you are the person you say you are. That’s a really narrow, specific application that’s matching my image to my ID, and the sign tells me—and I know from our DHS colleagues that this is really the case—that they’re going to delete the image. That’s an effective, responsible use of that kind of automated technology. Appropriate, respectful, responsible—that’s where we’ve got to go.
Were you surprised by the AI safety bill getting vetoed in California?
I wasn’t. I followed the controversy, and I knew that there were strong views on both sides. What the opponents of that bill expressed, which I think was accurate, is that it was simply impractical, because it was an expression of desire about how to assess safety, but we actually just don’t know how to do those things. Nobody knows. It’s not a secret—it’s a mystery.
To me, it really reminds us that while all we want is to know how safe, effective, and trustworthy a model is, we actually have very limited capability to answer those questions. Those are actually very deep research questions, and a great example of the kind of public R&D that now needs to be done at a much deeper level.
Let’s talk about talent. Much of the recent National Security Memorandum on AI was about how to help the right talent come from abroad to the US to work on AI. Do you think we’re handling that the right way?
It’s a hugely important issue. This is the ultimate American story—people have come here throughout the centuries to build this country, and it’s as true now in science and technology fields as it’s ever been. We’re living in a different world. I came here as a small child because my parents came here in the early 1960s from India, and in that era, there were very limited opportunities [to emigrate to] many other parts of the world.