
A human-centric approach to adopting AI

So in a short time, I gave you examples of how AI has become pervasive and really autonomous across multiple industries. This is the kind of trend that I'm super excited about, because I think this brings enormous opportunities for us to help businesses across different industries get more value out of this amazing technology.

Julie, your research focuses on the robotics side of AI, specifically building robots that work alongside humans in various fields like manufacturing, healthcare, and space exploration. How do you see robots helping with those dangerous and dirty jobs?

Yeah, that's right. So, I'm an AI researcher at MIT in the Computer Science & Artificial Intelligence Laboratory (CSAIL), and I run a robotics lab. The vision for my lab's work is to make machines, and these include robots and computers, become smarter, more capable of collaborating with people, where the aim is to be able to augment rather than replace human capability. And so we focus on developing and deploying AI-enabled robots that are capable of collaborating with people in physical environments, working alongside people in factories to help build planes and build cars. We also work in intelligent decision support to support expert decision makers doing very, very difficult tasks, tasks that many of us would never be good at no matter how long we spent trying to train up in the role. So, for instance, supporting nurses and doctors in running hospital units, and supporting fighter pilots to do mission planning.

The vision here is to be able to move out of this prior paradigm. In robotics, you can think of it as... I think of it as kind of "era one" of robotics, where we deployed robots, say in factories, but they were largely behind cages, and we had to very precisely structure the work for the robot. Then we've been able to move into this next era where we can remove the cages around these robots, and they can maneuver in the same environment more safely, do work in the same environment outside of the cages in proximity to people. But ultimately, these systems are essentially staying out of the way of people and are thus limited in the value that they can provide.

You see similar trends with AI, with machine learning in particular. The ways that you structure the environment for the machine are not necessarily physical the way you would with a cage or with setting up fixtures for a robot. But the process of collecting large amounts of data on a task or a process and developing, say, a predictor from that or a decision-making system from that really does require that when you deploy that system, the environments you're deploying it in look substantially similar to, and are not out of distribution from, the data that you've collected. And by and large, machine learning and AI has previously been developed to solve very specific tasks, not to do the whole jobs of people, and to do those tasks in ways that make it very difficult for these systems to work interdependently with people.

So the technologies my lab develops, both on the robotics side and on the AI side, are aimed at enabling high performance in tasks with robotics and AI, say increasing productivity or increasing quality of work, while also enabling greater flexibility and greater engagement from human experts and human decision makers. That requires rethinking how we draw inputs and leverage how people structure the world for machines, moving from these prior paradigms involving collecting large amounts of data or fixturing and structuring the environment, to developing systems that are much more interactive and collaborative, that enable people with domain expertise to be able to communicate and translate their knowledge more directly to and from machines. And that's a very exciting direction.

It's different than developing AI and robotics to replace work that's being done by people. It's really thinking about the redesign of that work. This is something my colleague and collaborator at MIT, Ben Armstrong, and I call positive-sum automation: how you shape technologies to be able to achieve high productivity, quality, and other traditional metrics while also realizing high flexibility and centering the human's role as a part of that work process.

Yeah, Lan, that's really specific and also interesting, and it plays on what you were just talking about earlier, which is how clients are thinking about manufacturing and AI, with a great example about factories, and also this idea that perhaps robots aren't here for only one purpose. They can be multi-functional, but at the same time they can't do a human's job. So how do you look at manufacturing and AI as these possibilities come toward us?

Sure, sure. I love what Julie was describing as a positive-sum gain; this is exactly how we view the holistic impact of AI and robotics technology in asset-heavy industries like manufacturing. So, although I'm not a deep robotics specialist like Julie, I have been delving into this area more from an industry applications perspective, because I personally was intrigued by the amount of data that's sitting around in what I call asset-heavy industries — the amount of data in IoT devices, right? Sensors, machines. And also think about all kinds of data. Obviously, they are not the typical kinds of IT data. Here we're talking about a tremendous amount of operational technology, OT, data, or in some cases also engineering technology, ET, data — things like diagrams, piping diagrams, and so on. So first of all, I think from a data standpoint, there's just an enormous amount of value in these traditional industries, which is, I think, truly underutilized.

And on the robotics and AI front, I definitely see the same patterns that Julie was describing. Using robots in multiple different ways on the factory shop floor — that's how the different industries are leveraging technology in this kind of underutilized space. For example, using robots in dangerous settings to help humans do these kinds of jobs more effectively. I always talk about one of the clients that we work with in Asia; they're actually in the business of manufacturing sanitary ware. In that case, glazing is the process of applying a glaze slurry on the surface of shaped ceramics. It's a centuries-old technique that humans have been doing. But since ancient times, a brush was used, and dangerous glazing processes can cause disease in workers.

Now, glazing application robots have taken over. These robots can spray the glaze with three times the efficiency of humans and a 100% uniformity rate. It's just one of the many, many examples on the shop floor in heavy manufacturing where robots are now taking over what humans used to do, and robots and humans work together to make this safer for humans and at the same time produce better products for consumers. So, that's the kind of exciting thing I'm seeing in how AI brings tangible benefits to society, to human beings.

That's a really interesting shift into this next topic, which is: how do we then talk about, as you mentioned, being responsible and having ethical AI, especially when we're discussing making people's jobs better, safer, and more consistent? And then how does this also play into responsible technology in general and how we're looking at the entire field?

Yeah, that's a super hot topic. I would say, as an AI practitioner, responsible AI has always been top of mind for us. But think about the recent advancements in generative AI; this topic is becoming even more urgent. So, while technical advancements in AI are very impressive, like the many examples I've been talking about, I think responsible AI is not purely a technical pursuit. It's also about how we use it, how each of us uses it as a consumer, as a business leader.

So at Accenture, our teams strive to design, build, and deploy AI in a manner that empowers employees and businesses and fairly impacts customers and society. I think responsible AI not only applies to us but is also at the core of how we help clients innovate. As they look to scale their use of AI, they want to be confident that their systems are going to perform reliably and as expected. Part of building that confidence, I think, is ensuring they've taken steps to avoid unintended consequences. That means making sure that there's no bias in their data and models, and that the data science team has the right skills and processes in place to produce more responsible outputs. Plus, we also make sure that there are governance structures for where and how AI is applied, especially when AI systems are making decisions that affect people's lives. So, there are many, many examples of that.

And given the recent excitement around generative AI, this topic becomes even more important, right? What we're seeing in the industry is that this is becoming one of the first questions that our clients ask us, to help them get generative AI ready, simply because there are newer risks and newer limitations being introduced by generative AI, in addition to some of the known or existing limitations from the past, when we talk about predictive or prescriptive AI. For example, misinformation. Your AI could, in this case, be producing very accurate results, but if the information or content generated by AI is not aligned with human values, is not aligned with your company's core values, then I don't think it's working, right? It could be a very accurate model, but we also need to pay attention to potential misinformation and misalignment. That's one example.

A second example is language toxicity. Again, in traditional or existing AI's case, when AI isn't producing content, language toxicity is less of an issue. But now this is becoming something that's top of mind for many business leaders, which means responsible AI also needs to cover this new set of risks and potential limitations to address language toxicity. So those are a couple of thoughts I have on responsible AI.

And Julie, you discussed how robots and humans can work together. So how do you think about changing the perception of the field? How can ethical AI and even governance help researchers, and not hinder them, with all this great new technology?

Yeah. I fully agree with Lan's comments here, and I've spent quite a good amount of effort over the past few years on this topic. I recently spent three years as an associate dean at MIT, building out our new cross-disciplinary program in social and ethical responsibilities of computing. This is a program that has involved, very deeply, nearly 10% of the faculty researchers at MIT — not only technologists, but social scientists, humanists, and those from the business school. And what I've taken away is, first of all, there's no codified process or rule book or design guidance on how to anticipate all the currently unknown unknowns. There's no world in which a technologist or an engineer sits on their own, or discusses and aims to test possible futures with those within the same disciplinary background or with other homogeneity in background, and is able to foresee the implications for other groups and the broader implications of these technologies.

The first question is, what are the right questions to ask? And then the second question is, who has methods and insights to be able to bring to bear on this across disciplines? That's what we've aimed to pioneer at MIT: to really bring this embedded approach to drawing in the scholarship and insight from those in other fields in academia, and those from outside of academia, and bring that into our practice in engineering new technologies.

And just to give you a concrete example of how hard it is to even just determine whether you're asking the right question: for the technologies that we develop in my lab, we believed for many years that the right question was, how do we develop and shape technologies so that they augment rather than replace? And that's been the public discourse about robots and AI taking people's jobs. "What's going to happen 10 years from now? What's happening today?" — with well-respected studies put out a few years ago finding that for every one robot you introduced into a community, that community loses up to six jobs.

So, what I learned through deep engagement with scholars from other disciplines here at MIT, as part of the Work of the Future task force, is that that's actually not the right question. As it turns out — just take manufacturing as an example, because there's very good data there — in manufacturing broadly, only one in 10 firms has a single robot, and that's including the very large firms that make high use of robots, like automotive and other fields. And then when you look at small and medium firms, those with 500 or fewer employees, there are essentially no robots anywhere. And there are significant challenges in upgrading technology, in bringing the latest technologies into these firms. These firms represent 98% of all manufacturers in the US and account for 40% to 50% of the US manufacturing workforce. There's good data that the lagging technological upgrading of these firms is a very serious competitiveness issue for them.

And so what I learned through this deep collaboration with colleagues from other disciplines at MIT and elsewhere is that the question isn't "How do we address the problem we're creating of robots or AI taking people's jobs?" but "Are robots and the technologies we're developing actually doing the job that we need them to do, and why are they actually not useful in these settings?" And you have these really exciting case stories of the few cases where these firms are able to bring in, implement, and scale these technologies. They see a whole host of benefits. They don't lose jobs; they're able to take on more work; they're able to bring on more workers; those workers have higher wages; the firm is more productive. So how do you realize this win-win-win situation, and why is it that so few firms are able to achieve it?

There are many different factors. There are organizational and policy factors, but there are actually technological factors as well that we are now really laser focused on in the lab: aiming to address how you enable those with the domain expertise, but not necessarily engineering or robotics or programming expertise, to be able to program the system — to program the task rather than program the robot. It was a humbling experience for me to believe I was asking the right questions while engaging in this research, and to really understand that the world is a much more nuanced and complicated place, and that we're able to understand that much better through these collaborations across disciplines. And that comes back to directly shape the work we do and the impact we have on society.

And so we have a really exciting program at MIT training the next generation of engineers to be able to communicate across disciplines in this way, and future generations will be much better off for it than those of us engineers who received our training in the past.

Yeah, I think Julie brought up such a great point, right? It resonated so well with me. I don't think this is something you only see in an academic setting; this is exactly the kind of change I'm seeing in industry too. How the different roles within the artificial intelligence space come together and work in a highly collaborative way around this amazing technology — that's something I'll admit I'd never seen before. In the past, AI seemed to be perceived as something that only a small group of deep researchers or deep scientists would be able to do, almost like, "Oh, that's something they do in the lab." That's a lot of the perception from my clients. That's why scaling AI in enterprise settings has been a huge challenge.

With the recent advancements in foundation models, large language models, all these pre-trained models that large tech companies have been building — and obviously academic institutions are a huge part of this — I'm seeing more open innovation, a more open, collaborative way of working in the enterprise setting too. I love what you described earlier. It's a multi-disciplinary thing, right? It's not like with AI you go to computer science, you get an advanced degree, and then that's the only path to do AI. What we're also seeing in the business setting is people, leaders with multiple backgrounds and multiple disciplines within the organization, coming together: computer scientists; AI engineers; social scientists and even behavioral scientists, who are really, really good at defining different kinds of experimentation to play with this kind of AI in its early stage; statisticians, because at the end of the day it's about probability theory; economists; and of course also engineers.

So even within a company setting in these industries, we're seeing a more open attitude, with everybody coming together around this amazing technology to contribute. We always talk about a hub-and-spoke model. I actually think this is happening: everybody is getting excited about the technology, rolling up their sleeves, and bringing their different backgrounds and skill sets to contribute. And I think this is a critical change, a culture shift that we've seen in the business setting. That's why I'm so optimistic about this positive-sum game that we talked about earlier, which is the ultimate impact of the technology.

That's a really great point. Julie, Lan mentioned it earlier, but also this access for everybody to some of these technologies, like generative AI and AI chatbots, can help everyone build new ideas and explore and experiment. But how does it really help researchers build and adopt those kinds of emerging AI technologies that everybody's keeping a close eye on, on the horizon?

Yeah. Yeah. So, talking about generative AI: for the past 10 or 15 years, each year I thought I was working in the most exciting time possible in this field. And then it just happens again. For me the really interesting aspect, or one of the really interesting aspects, of generative AI and GPT and ChatGPT is, one, as you mentioned, that it's really in the hands of the public, who can interact with it and envision a multitude of ways it could potentially be useful. But the work we've been doing in what we call positive-sum automation is around these sectors where performance matters a lot and reliability matters a lot. You think about manufacturing, you think about aerospace, you think about healthcare. The introduction of automation, AI, and robotics has indexed on that, and at the cost of flexibility. And so part of our research agenda is aiming to achieve the best of both those worlds.

The generative capability is very interesting to me because it's another point in this space of high performance versus flexibility. This is a capability that is very, very flexible. That's the idea of training these foundation models, and everybody can get a direct sense of that from interacting with it and playing with it. This is not a scenario anymore where we're very carefully crafting the system to perform at very high capability on very, very specific tasks. It's very flexible in the tasks you can envision applying it to. And that's game-changing for AI, but on the flip side, the failure modes of the system are very difficult to predict.

So, for high-stakes applications, you're never really developing the capability of performing some specific task in isolation. You're thinking from a systems perspective about how you bring the relative strengths and weaknesses of different components together for overall performance. The way you need to architect this capability within a system is very different than with other forms of AI or robotics or automation, because you now have a capability that's very flexible but also unpredictable in how it's going to perform. And so you need to design the rest of the system around that, or you need to carve out the aspects or tasks where failures in particular modes are not critical.

So take chatbots, for example. By and large, for many of their uses, they can be very helpful in driving engagement, and that's of great benefit for some products or some organizations. But being able to layer this technology with other AI technologies that don't have these particular failure modes, and layer them in with human oversight, supervision, and engagement, becomes really important. So how you architect the overall system with this new technology, with these very different characteristics, I think is very exciting and very new. And even on the research side, we're just scratching the surface of how to do that. There's a lot of room for a study of best practices here, particularly in these more high-stakes application areas.

I think Julie makes such a great point that's really resonating with me. Again, I'm seeing the very same thing. I love the couple of keywords she was using: flexibility, positive-sum automation. There are two colors I want to add there. On the flexibility front, this is exactly what we're seeing: flexibility through specialization, right? With the power of generative AI. Another term that came to my mind is resilience, okay? So now AI becomes more specialized, right? AI and humans actually become more specialized, so that we can each focus on the things, the skills or roles, that we're best at.

At Accenture, we just recently published our perspective, "A new era of generative AI for everyone." Within that perspective, we laid out what I call the ACCAP framework. It basically addresses, I think, similar points to what Julie was talking about. So basically: advise, create, code, then automate, and then protect. If you link the first letters of those five words together, you get what I call the ACCAP framework (so that I can remember those five things). But I think those are the different ways we're seeing AI and humans working together manifest this kind of collaboration.

For example, advising: it's pretty obvious with generative AI capabilities. Think of the chatbot example that Julie was talking about earlier. Now imagine every role, every knowledge worker's role in an organization, could have this co-pilot running behind the scenes. In a contact center's case it could be, okay, now you're getting generative AI doing auto-summarization of the agents' calls with customers at the end of the calls, so the agent doesn't have to spend time doing this manually. And then customers will get happier, because customer sentiment will improve, as detected by generative AI. Creating — obviously there are many, even consumer-centric, cases around how human creativity is getting unleashed.

And there are also business examples in marketing, in hyper-personalization, where this kind of creativity from AI is being best utilized. Then automating — again, we've been talking about robotics, right? How robots and humans work together to take over some of these mundane tasks. But in generative AI's case it's not even just the blue-collar kind of jobs, the more mundane tasks; it's also about more mundane, routine tasks in knowledge worker spaces. Those are a couple of examples that I think about when I think of the phrase "flexibility through specialization."

And by doing so, new roles are going to get created. From our perspective, we've been focusing on prompt engineering as a new discipline within the AI space. AI ethics specialist is another; we believe that this role is going to take off very quickly simply because of the responsible AI topics that we just talked about.

And also, because all these business processes have become more efficient, more optimized, we believe that new demand — not only new roles — is going to be created. Every company, regardless of what industry you're in, if you become very good at mastering and harnessing the power of this kind of AI, new demand is going to be created, because now your products are improving, you're able to provide a better experience to your customers, and your pricing is going to get optimized. So bringing this together — which is my second point — this will bring a positive sum to society. In economic terms, you're pushing out the production possibility frontier for society as a whole.

So, I'm very optimistic about all these amazing aspects of AI: flexibility, resilience, specialization, and also generating more economic benefit and economic growth for society. As long as we walk into this with eyes wide open, so that we understand some of the existing limitations, I'm sure we can do both.

And Julie, Lan just laid out this incredible view of generative AI as well as what's possible in the future. What are you thinking about artificial intelligence and the opportunities in the next three to five years?

Yeah. Yeah. So, I think Lan and I are very largely on the same page on the majority of these topics, which is really great to hear from the academic and the industry sides. Sometimes it can feel as if the emergence of these technologies is just going to steamroll, and work and jobs are going to change in some predetermined way because the technology now exists. But we know from the research that the data doesn't actually bear that out. There are many, many choices you make in how you design, implement, deploy, and even make the business case for these technologies that can really change the course of what you see in the world because of them. And for me, I think a lot about this question of what's called "lights out" in manufacturing — lights-out operation, where there's this idea that with the advances in all these capabilities, you'd aim to be able to run everything without people at all. So you don't need lights on for the people.

And again, as part of the Work of the Future task force and the research that we've done visiting firms — manufacturers, OEMs, suppliers, large international or multinational firms as well as small and medium firms internationally — the research team asked this question: "So these high performers that are adopting new technologies and doing well with them, where is all this headed? Is this headed towards a lights-out factory for you?" And there was a range of answers. Some people did say, "Yes, we're aiming for a lights-out factory," but actually many said no, that that was not the end goal. And in one of the quotes, one of the interviewees stopped while giving a tour, turned around, and said, "A lights-out factory? Why would I want a lights-out factory? A factory without people is a factory that isn't innovating."

I think that's the core point of this for me. When we deploy robots, are we caging them and sort of locking the people out of that process? When we deploy AI, is the infrastructure and data curation process so intensive that it really locks out the ability for a domain expert to come in, understand the process, and be able to engage and innovate? And so for me, the most exciting research directions are those that enable us to pursue this human-centered approach to adoption and deployment of the technology, and that enable people to drive this innovation process. So in a factory, there's a well-defined productivity curve. You don't get your assembly process right when you start. That's true in any job or any field. You never get it exactly right, or optimized, at the start, but it's a very human process to improve. And how do we develop these technologies such that we're maximally leveraging our human capability to innovate and improve how we do our work?

My view is that, by and large, the technologies we have today are really not designed to support that, and they really impede that process in many different ways. But you do see increasing investment in exciting capabilities with which you can engage people in this human-centered process and see all the benefits from that. And so for me, on the technology side, in shaping and developing new technologies, I'm most excited about the technologies that enable that capability.

Excellent. Julie and Lan, thank you so much for joining us today on what's been a really fantastic episode of The Business Lab.

Thank you so much for having us.

Thank you.

That was Lan Guan of Accenture and Julie Shah of MIT, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River.

That's it for this episode of Business Lab. I'm your host, Laurel Ruma. I'm the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. You can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you'll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Giro Studios. Thanks for listening.
