This Is Why Human-Centred AI Design Guidebooks Can Gracefully Fail When Used in Manufacturing

Photo by UX Indonesia on Unsplash

There is growing attention to Human-centred AI (HAI) in various communities. The essential idea of HAI is to place humans and humanity at the centre of designing AI-powered applications. HAI design seeks a symbiosis of humans and AI: AI assisting human tasks rather than replacing them, and humans improving AI by providing feedback.

Big tech companies like Google, Microsoft, IBM, and Apple, as well as other enterprises, value the idea of HAI and have developed their own HAI design methods and shared them publicly as guidebooks. For instance, the People+AI guidebook from Google’s PAIR research team shows how to organise and facilitate a series of workshops where participants with different domain expertise co-design the functionality and user interface of an AI application. It also provides a set of questions and guides that need to be addressed during the HAI design process, such as “What is the user value of the application?” and “How should the prediction results be explained to users?”. The guidebook further gives various example use cases where HAI designs were applied in practice to inspire the design participants. Microsoft shares an HAI method called the “HAX Toolkit”. It offers design guides and workbooks in PowerPoint and Excel formats that serve a similar purpose to the PAIR guidebook.

The essence of these HAI methods is alike: they enable people from multiple domains to take part in the design process and help capture and transform those people’s needs into the application design by integrating the theories and practice of User Experience, Design Thinking, and Responsible AI into a unified design framework.

Photo by ThisisEngineering RAEng on Unsplash

Okay, now let’s talk about manufacturing :). You may think manufacturing is a domain that can be easily automated, but not at all! Even in many modern factories, many skilled people are at work and play important roles in developing, running, and improving factory operations. It is therefore vital to create harmony between humans and machines.

So why not use HAI methods when integrating AI technology in manufacturing? That is what we, a research team with expertise in AI, manufacturing, and UX, thought. We work with a large multinational manufacturer that is trying to develop and implement a machine learning model for anomaly detection in its manufacturing processes. The model detects anomalous patterns in the data from sensory devices installed in critical pieces of manufacturing equipment, and a prototype of the model already existed. We used the People+AI guidebook to support the company’s AI project. This method was chosen because it appeared to be the most comprehensive and well-structured one. We applied it in a one-day workshop with about ten company members in various roles, such as R&D engineers, process engineers, data scientists, technicians, and Lean Six Sigma experts.
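
To make the setting a bit more concrete, here is a minimal, purely illustrative sketch of what such a prototype could look like: an unsupervised detector (scikit-learn’s IsolationForest) flagging unusual vibration and temperature readings. The sensor names, values, and model choice are our assumptions for illustration, not the company’s actual model.

```python
# Illustrative sketch only: an unsupervised anomaly detector over simulated
# equipment sensor readings. Column names, values, and model choice are
# assumptions for illustration, not the actual production model.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate mostly healthy historical readings: vibration (mm/s) and temperature (°C).
history = pd.DataFrame({
    "vibration_mm_s": rng.normal(2.0, 0.3, 1000),
    "temperature_c": rng.normal(65.0, 2.0, 1000),
})

# Fit the detector on the historical data, assumed to be largely anomaly-free.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(history)

# Score a fresh batch of readings; a label of -1 marks a suspected anomaly.
new_batch = pd.DataFrame({
    "vibration_mm_s": [2.1, 5.8, 1.9],
    "temperature_c": [64.2, 88.5, 66.1],
})
labels = detector.predict(new_batch)            # array of 1 / -1
scores = detector.decision_function(new_batch)  # lower = more anomalous

for row, label, score in zip(new_batch.itertuples(index=False), labels, scores):
    status = "ANOMALY" if label == -1 else "ok"
    print(f"{row} -> {status} (score={score:.3f})")
```

In a real deployment, such a score would of course feed into the shop-floor monitoring and alarm handling discussed below, which is exactly where the design questions start piling up.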

So, what was the result of using the method? Well, I would not call it a complete failure, but it was not particularly successful either. Overall, the method did not effectively address the complex and multifaceted challenges of designing an AI-powered application for an industrial process. The workshop facilitators (us) and the participants felt they had to deal with too many questions from different angles at the same time, causing cognitive overload and a disorganised, confusing experience.

But we learned a lot from it! We became aware that the method was simply not a good fit for the manufacturing context and that a substantial reconstruction of the method would be necessary to address the challenges we experienced. Considering the similarity among the HAI methods, we believe the outcome would not have been much different if we had used another one.

Let us share our reflections on why the method gracefully failed, with cognitive overload and confusion, when used in manufacturing. Several factors contributed to the failure, but in this article I pick out three significant ones. I hope this article is enjoyable for anyone interested in using AI in industrial settings, whatever your expertise.

1. Workflow design was not an integral part of the method:

The current HAI methods from the tech companies and other enterprises appear to be primarily aimed at assisting the design of applications used by a single user, such as mobile phone apps for consumers. In such use cases, the interaction between human and machine typically occurs through the screen, fingers, eyes, and ears of the user. The guidebook supports the design of such interactions well, enabling designers to explore different user scenarios and experiences, find the right balance between automation and user control, manage expectations around AI capabilities, and so forth.

However, the context of an AI service in an industrial facility can be quite complex. Let us imagine the case of using anomaly detection in a manufacturing plant. The application shows the health status of the sensory devices on a monitor screen placed on the shop floor and sends an alarm when an anomaly is detected. The first-hand users of the application are operators. Of course, the interactions between operators and the application are important, but things do not end there. What should operators do, or want to do, when they receive the alarm? Does the operator need to analyse the situation more deeply on their own with the help of the application? Or should they consult a supervisor or technician for further analysis and decision-making? Should the equipment supplier be contacted immediately instead? Does the appropriate action depend on the seriousness of the anomaly? Does it depend on the skill and knowledge of the individuals involved? How many stakeholders need to be involved in the decision-making? What information should be available to them? How can information be shared among those actors?

As you can see, in a manufacturing context an initial event (raising the alarm, in this case) often triggers a complex chain of other actions, potentially involving multiple individuals inside or outside of the organisation. Let’s call this chain of actions a workflow. We have learned that finger-eye-screen interactions can hardly be designed without designing the workflows. Thus, it is crucial to consider these designs concurrently, or at least to plan the workflow design earlier in the development process, as they are closely interconnected.
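
To illustrate why the workflow matters as much as the screen, here is a deliberately simplified sketch of how a single alarm could be routed through such a chain of actions. The severity levels, roles, and routing rules are hypothetical; a real plant would have far more branches and stakeholders.

```python
# Hypothetical escalation workflow triggered by an anomaly alarm.
# Severity levels, roles, and routing rules are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    LOW = 1       # log it, review at the next shift handover
    MEDIUM = 2    # operator investigates with the application's help
    HIGH = 3      # supervisor/technician decides, possibly stop the line
    CRITICAL = 4  # contact the equipment supplier immediately


@dataclass
class Alarm:
    equipment_id: str
    severity: Severity
    detail: str


def route(alarm: Alarm) -> list[str]:
    """Return the chain of actions (the 'workflow') that an alarm triggers."""
    steps = [f"Notify operator on shop-floor screen: {alarm.detail}"]
    if alarm.severity is Severity.LOW:
        steps.append("Log for review at shift handover")
    elif alarm.severity is Severity.MEDIUM:
        steps.append("Operator analyses the trend in the application")
    elif alarm.severity is Severity.HIGH:
        steps += ["Escalate to supervisor/technician",
                  "Decide whether to stop the equipment"]
    else:  # CRITICAL
        steps += ["Escalate to supervisor",
                  "Contact equipment supplier"]
    return steps


print(route(Alarm("press-07", Severity.HIGH, "vibration spike")))
```

Even this toy version forces decisions about roles, escalation criteria, and information flow that no screen prototype can answer on its own.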

Did the HAI method support this? No, not for the workflow design part. During the workshop at the case company, the design participants were happy to create different paper prototypes of how anomaly status and other relevant information should be displayed on the shop floor. However, they quickly became unsure which prototypes would be suitable for actual use, as they had a limited understanding of how the workflows would unfold. An alarm on the shop floor is only one trigger of a workflow. There could be more scenarios triggering other workflows, such as false negatives, false positives, sensor degradation, sensor upgrades, etc. Without proper methodological support, imagining all those scenarios and their corresponding workflows required significant cognitive effort from the participants.

Photo by Cristina Gottardi on Unsplash

2. The design guides and questions sparked a multitude of further questions:

As we discussed in the introduction of this article, the PAIR guidebook, like other HAI methods, offers a set of questions and guides that have to be considered during the application design process. Here are some more examples: “How to establish an appropriate level of trust so that users will not put too much or too little trust in the AI result?”, “How can the application accept feedback from the users to improve the application’s behaviour?”.

These questions and guides are surely helpful for thoroughly tackling key design concerns in the design process. At the same time, addressing them requires extensive what-if thinking, especially for AI-driven applications that behave probabilistically. The precise behaviours of such applications are not always clear during development. For simpler interactions, such as those of mobile phone apps, the what-if thinking may still be manageable. In the workshop at the company, however, the what-if thinking quickly snowballed to a level we could not handle.

At the beginning of the workshop, little was decided except the participants’ will to use the anomaly detection model in their operations. We followed the design process that the guidebook suggested, and the design guides and questions appeared to be helpful for the process. The workshop participants, however, quickly became unsure which questions and guides were more important than others and in which depth or detail the questions should be answered. The questions were also tightly interrelated.

Consequently, answering those questions became a lot of guesswork. Let’s take one of the design questions as an example: how to establish the users’ trust in the application. Many factors can affect this, but at the very least it depends on how prediction results are presented to users. The design of that presentation is affected by the model’s performance. The performance, in turn, will be affected by production-phase data that is not fully known during development. And, as we discussed earlier, the result presentation also depends on the workflows.
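
To make the tangle concrete, here is a toy model that treats the design questions as a small dependency graph and expands the chain hiding behind the single question about user trust. The questions and the edges between them are our own illustrative choices, not part of the PAIR guidebook.

```python
# Toy model of how one design question depends on others.
# The questions and the edges between them are illustrative assumptions.
DEPENDS_ON = {
    "How do we establish user trust?": ["How are prediction results presented?"],
    "How are prediction results presented?": [
        "How well does the model perform?",
        "How do the surrounding workflows unfold?",
    ],
    "How well does the model perform?": ["What will the production-phase data look like?"],
    "What will the production-phase data look like?": [],  # unknown at design time
    "How do the surrounding workflows unfold?": [],         # not covered by the method
}


def expand(question, depth=0, seen=None):
    """Print the chain of questions that must be answered before `question`."""
    seen = seen if seen is not None else set()
    print("  " * depth + question)
    for prerequisite in DEPENDS_ON.get(question, []):
        if prerequisite not in seen:
            seen.add(prerequisite)
            expand(prerequisite, depth + 1, seen)


expand("How do we establish user trust?")
```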

As you can see, a single design question triggers a series of other interlinked questions that can hardly be answered directly. An answer depends on another answer, which in turn depends on yet another answer that can only be partially answered… no wonder the participants quickly became puzzled and overwhelmed. A participant said in the end, “Okay, we now know there is a big mountain ahead of us, but we are still unsure how to climb it.”

Photo by Luis Villasmil on Unsplash

3. The responsibility of consolidating the collected information was ambiguous:

The HAI methods help design participants generate a large amount of information necessary to design an AI-powered application. The methods offer various tools, such as ideation cards, design questions, guides, and workbooks, to assist in the generation and documentation of this information.

But who will consolidate all that information? During the workshop, it became apparent that the method was designed primarily from a UX designer’s perspective and that the designer appeared to be the one expected to consolidate the information and transform it into the design.

Okay, we understand that the phrase “Human-Centred AI” is emphasised in the HAI methods, but they are heavily biased toward UX. This bias may not confuse people when the methods are used for a simpler interaction, such as a mobile phone app. UX designers have rich experience in designing the functionality and interface of such applications.

But what about when the method is used for industrial processes, where the workflow design is a critical and inseparable part of the interaction design? In such a multifaceted use case, should a UX designer still consolidate the information from the workshop? Or would a project leader with a broad and in-depth understanding of the industrial processes be better suited to the task? We began the workshop with no clear understanding of this issue, which further complicated a workshop that was already a mess!

Finally, learning from the failure and moving on…

These three factors were already sufficient to overwhelm the participants and create a state of cognitive overload and confusion. We simply entered the workshop with a premature understanding of the limitations of HAI methods when applied to industrial processes. Although those methods provide a solid foundation, we found that a significant modification would be necessary to fit the manufacturing domain.

We are currently developing a new method based on our learnings and testing it at companies. We know at least that workflow design should be integrated into the method and that the method should effectively handle the flurry of interrelated what-if questions that arise during the design process. Hopefully, we can report the results in the future!! :).

# This blog post was written together with my colleagues Kristian Sandström and Alvaro Aranda Munoz. Thanks!
