
Juliette Powell & Art Kleiner, Authors of The AI Dilemma – Interview Series


The AI Dilemma is written by Juliette Powell & Art Kleiner.

Juliette Powell is a writer, a television creator with 9,000 live shows under her belt, and a technologist and sociologist. She is also a commentator on Bloomberg TV/Business News Networks and a speaker at conferences organized by the Economist and the International Finance Corporation. Her TED talk has 130K views on YouTube. Juliette identifies the patterns and practices of successful business leaders who bank on ethical AI and data to win. She is on faculty at NYU’s ITP, where she teaches four courses, including Design Skills for Responsible Media, a course based on her book.

Art Kleiner is an author, editor, and futurist, and has written several books. He was editor of strategy+business, the award-winning magazine published by PwC. Art is also a longstanding faculty member at NYU-ITP and IMA, where his courses include co-teaching Responsible Technology and the Future of Media.

The AI Dilemma is a book that focuses on the risks of AI technology in the wrong hands, while still acknowledging the benefits AI offers to society.

Problems arise because the underlying technology is so complex that it becomes impossible for the end user to truly understand the inner workings of a closed-box system.

One of the most significant issues highlighted is how the definition of responsible AI keeps shifting, because societal values often do not stay consistent over time.

I quite enjoyed reading “The AI Dilemma”. It is a book that does not sensationalize the risks of AI or delve deeply into the potential pitfalls of Artificial General Intelligence (AGI). Instead, readers learn about the surprising ways our personal data is used without our knowledge, as well as some of the current limitations of AI and reasons for concern.

Below are some questions designed to show our readers what they can expect from this groundbreaking book.

What initially inspired you to write “The AI Dilemma”?

Juliette went to Columbia partly to study the limits and possibilities of AI regulation. She had heard firsthand from friends working on AI projects about the tension inherent in those projects. She came to the conclusion that there was an AI dilemma, a much larger problem than self-regulation. She developed the Apex benchmark model, a model of how decisions about AI tended toward low responsibility because of interactions among companies and groups within companies. That led to her dissertation.

Art had worked with Juliette on a number of writing projects. He read her dissertation and said, “You might have a book here.” Juliette invited him to coauthor it. In working on it together, they found that they had very different perspectives but shared a strong view that this complex, highly risky AI phenomenon would need to be understood better so that people using it could act more responsibly and effectively.

One of the fundamental problems highlighted in The AI Dilemma is that it is currently impossible to know, simply by studying its source code, whether an AI system is responsible or whether it perpetuates social inequality. How big of an issue is this?

The problem is not primarily with the source code. As Cathy O’Neil points out, when there is a closed-box system, it is not just the code. It is the sociotechnical system, the human and technological forces that shape each other, that must be explored. The logic that built and released the AI system involved identifying a purpose, identifying data, setting the priorities, creating models, setting up guidelines and guardrails for machine learning, and deciding when and how a human should intervene. That is the part that needs to be made transparent, at least to observers and auditors. The risk of social inequality, along with other risks, is much greater when these parts of the process are hidden. You can’t really reverse-engineer the design logic from the source code.

Can focusing on Explainable AI (XAI) ever address this?

To engineers, explainable AI is currently considered a group of technological constraints and practices aimed at making the models more transparent to the people working on them. For someone who is being falsely accused, explainability has a whole different meaning and urgency. They need explainability to be able to push back in their own defense. We all need explainability in the sense of making the business or government decisions underlying the models clear. At least in the United States, there will always be a tension between explainability, humanity’s right to know, and a company’s right to compete and innovate. Auditors and regulators need a different level of explainability. We go into this in more detail in The AI Dilemma.

Can you briefly share your views on the importance of holding stakeholders (AI companies) accountable for the code that they release to the world?

So far, for instance in the Tempe, AZ self-driving car collision that killed a pedestrian, the operator was held responsible. A person went to jail. Ultimately, however, it was an organizational failure.

When a bridge collapses, the mechanical engineer is held responsible. That’s because mechanical engineers are trained, continually retrained, and held accountable by their profession. Computer engineers are not.

Should stakeholders, including AI companies, be trained and retrained to make better decisions and take on more responsibility?

The AI Dilemma focused quite a bit on how companies like Google and Meta can harvest and monetize our personal data. Could you share an example of significant misuse of our data that should be on everyone’s radar?

From The AI Dilemma, page 67ff:

Recent cases of systematic personal data misuse continue to emerge into public view, many involving covert use of facial recognition. In December 2022, MIT Technology Review published accounts of a longstanding iRobot practice. Roomba household robots record images and videos taken in volunteer beta-testers’ homes, which inevitably means gathering intimate personal and family-related images. These are shared, without testers’ awareness, with groups outside the country. In at least one case, an image of a person on a toilet was posted on Facebook. Meanwhile, in Iran, authorities have begun using data from facial recognition systems to track and arrest women who are not wearing hijabs.16

There’s no need to belabor these stories further. There are so many of them. It is important, however, to identify the cumulative effect of living this way. We lose our sense of having control over our lives when we feel that our private information could be used against us, at any time, unexpectedly.

One dangerous concept that was brought up is how our entire world is designed to be frictionless, with the definition of friction being “any point in the customer’s journey with a company where they hit a snag that slows them down or causes dissatisfaction.” How does our expectation of a frictionless experience potentially lead to dangerous AI?

In New Zealand, Pak’nSave’s Savey Meal-bot suggested a recipe that would create chlorine gas if used. This was promoted as a way for customers to use up leftovers and save money.

Frictionlessness creates an illusion of control. It’s faster and easier to listen to the app than to look up grandma’s recipe. People follow the path of least resistance and don’t realize where it’s taking them.

Friction, by contrast, is creative. You get involved. This leads to actual control. Actual control requires attention and work, and, in the case of AI, doing an extended cost-benefit analysis.

With the illusion of control, it seems like we live in a world where AI systems are prompting humans, instead of humans remaining fully in control. What are some examples you can give of humans collectively believing they have control, when really, they have none?

San Francisco right now, with robotaxis. The idea of self-driving taxis tends to bring up two conflicting emotions: excitement (“taxis at a much lower cost!”) and fear (“will they hit me?”). Thus, many regulators suggest that the cars get tested with people in them who can manage the controls. Unfortunately, having humans on alert, able to override systems in real time, may not be a good test of public safety. Overconfidence is a frequent dynamic with AI systems. The more autonomous the system, the more human operators tend to trust it and not pay full attention. We get bored watching over these technologies. When an accident is actually about to happen, we don’t expect it and we often don’t react in time.

A lot of research went into this book. Was there anything that surprised you?

One thing that really surprised us was that people around the world couldn’t agree on who should live and who should die in The Moral Machine’s simulation of a self-driving car collision. If we can’t agree on that, then it’s hard to imagine that we could have unified global governance or universal standards for AI systems.

You both describe yourselves as entrepreneurs. How will what you learned and reported on influence your future efforts?

Our AI Advisory practice is oriented toward helping organizations grow responsibly with the technology. Lawyers, engineers, social scientists, and business thinkers are all stakeholders in the future of AI. In our work, we bring all these perspectives together and practice creative friction to find better solutions. We’ve developed frameworks like the calculus of intentional risk to help navigate these issues.
