UX in the Laboratory: 10 Key Lessons from Designing LabTwin, an AI Voice Assistant

January 22, 2024

At LabTwin, we're at the forefront of integrating AI and voice technology into drug development, navigating both challenges and opportunities. In this article, I (at least try to 😅) share ten lessons from my journey designing LabTwin, a cutting-edge AI voice assistant for lab scientists. Reflecting on my work in 2023, these ten lessons are becoming foundational to my approach. They continually remind me that at the core of UX Design, especially in a field as advanced as AI and voice technology, the primary focus should always be on effectively addressing the needs of our users. My experience has profoundly emphasized the importance of user-centric Design and the power of collaborative innovation. It's about striking the right balance between technological possibilities and practical user applications.

Solve Real Problems with AI and Voice Technology

Understanding the Problem (always) — Generated by my prompts in DALL·E 3

Voice and AI are transformative technologies, but like every Product, their value lies in solving real problems. The excitement surrounding these advanced technologies can sometimes lead to a focus on capability over practicality. It's essential to measure the success of a technology by its effectiveness in addressing real-world challenges and improving user experience. The greater the potential of a technology, the easier it is to fall into the "Let's build this awesome feature" trap. This trend, where technology development overshadows user needs, is a common pitfall in the industry (Who said PMs? 😉).

Such a scenario highlights the importance of user-centric Design methodologies, which advocate prioritizing the user's problems and ensuring that technological advancements genuinely enhance user value. By continually asking, "What problem are we solving for the user?" and "How does this technology improve the user's life or work?" we can ensure that the development of AI and voice technologies remains grounded in enhancing user value.

For example, I suggest regularly checking whether the problems you think are most important match what the Sales, Account Management, and Customer Support teams are hearing. At LabTwin, our Product managers build and curate a Jira board of feature requests and enhancements. It gives us the exact problem definition and user evidence when discussing different customers or deciding which features to address or develop next. Even if this process is not perfect (like every process 😄), it is a great way to stay focused on the "real problems."

Further Reading: "The Design of Everyday Things" by Don Norman.

Accurately Gauge and Address Lab Scientists' Problems

(Over/Under) Estimating User Problems — Generated by my prompts in DALL·E 3

Sometimes, we over- or underestimate user problems. We often make mistakes like this, as correctly judging a problem's scale is tricky. It is especially challenging in niche B2B products, where you are working with a limited number of users and/or talking with them is complicated because of the nature of their jobs. Moreover, there is always the risk of only hearing what you want to hear 👂 because you already want to build "feature X or Y."

Use frameworks like the Lean UX Canvas to present your user problems to the rest of the team. Be bold and collect input from Customer support, Sales, PMs, and other Designers! It is essential so you can be challenged as much as possible. The Lean UX approach will further validate your assumptions and hypotheses by focusing on rapid concept prototyping and feedback. Ultimately, you should build features that will make your users spend more time in your app or attract more users. It is as "simple" as that. It's like being a gourmet chef in a kitchen full of gadgets. Sure, the latest tech is flashy, but if it doesn't help you make a better coq au vin 🐔🍷, what's the point, right?

For example: In the case of LabTwin, the initial perception was that an advanced scientific speech-to-text system would suffice to be on par with user expectations. However, it soon became clear that even a single error in voice transcription could significantly undermine a scientist’s trust in the product, given the precision required in their work.

Essential Tools We Used for Enhancing Voice-to-Text Accuracy:

  1. Targeted User Feedback: Gathering specific feedback from scientists about instances where the voice-to-text system faltered, particularly in understanding complex terminologies like compound names.
  2. Focused Prototype Testing: Testing improved prototypes directly in lab environments to assess accuracy under real-world conditions.
  3. AI and Machine Learning Enhancement: Implementing AI algorithms and custom, customer-specific rules to better grasp and accurately transcribe specialized scientific language 🧪 (see the sketch below).

Through these streamlined approaches, LabTwin was able to significantly refine its speech-to-text accuracy, crucially maintaining the trust of its users by minimizing errors and ensuring reliability in their critical work environments. It demonstrates the importance of precision and the continuous need to align technology closely with its users' specific, high-stakes needs 💁.
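
To make the idea of "customer-specific rules" a bit more concrete, here is a minimal, purely illustrative sketch (in Python) of a post-processing step that corrects near-miss transcriptions against a curated glossary of scientific terms. The glossary, function name, and matching threshold are my assumptions for the example, not LabTwin's actual implementation.

```python
from difflib import get_close_matches

# Single-token glossary terms for simplicity; a real customer glossary would be
# much larger and often contain multi-word compound names.
CUSTOMER_GLOSSARY = ["acetonitrile", "methanol", "trypsin", "HEK293"]
_LOWERED = {term.lower(): term for term in CUSTOMER_GLOSSARY}

def correct_transcript(transcript: str, cutoff: float = 0.8) -> str:
    """Replace words that nearly match a known scientific term with the exact term."""
    corrected = []
    for word in transcript.split():
        match = get_close_matches(word.lower(), _LOWERED.keys(), n=1, cutoff=cutoff)
        corrected.append(_LOWERED[match[0]] if match else word)
    return " ".join(corrected)

# Example (hypothetical misrecognition):
# correct_transcript("add 5 ml of acetonitril to the vial")
# -> "add 5 ml of acetonitrile to the vial"
```

In practice, rules like this would sit alongside model-level improvements rather than replace them; the point is that even a thin layer of domain knowledge can protect the trust that a single transcription error would otherwise erode.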

Further Reading: "Lean UX: Applying Lean Principles to Improve User Experience" by Jeff Gothelf and Josh Seiden.

Empower All Team Members to Contribute to User Research

Research as a Collective Responsibility — Generated by my prompts in DALL·E 3

Cultivating a research-oriented culture is crucial, involving everyone from Engineering to Customer support in user research. This approach fosters empathy and comprehensive insights, echoing participatory Design principles, where Stakeholder involvement ensures tailored solutions. Closely related to building an excellent research culture is empowering everyone in your organization to do research: anyone interested in knowing more about your users should regularly conduct or participate in user interviews 🎙️ and/or user testing.

This is crucial for us because we often help scientists solve use cases specific to their role and/or their company, so the more empathy the team has, the better equipped we are to build a best-in-class product for them. Engineers, Customer support, Sales, Designers, and Product managers often visit and observe our users on-site. It brings a lot to the table when it is time to challenge each other's work and build the best experience for our users. Involving everyone in user research is like assembling a puzzle 🧩 — whether a piece comes from Engineering or Customer Support, each one helps complete the picture.

For example, during customer onsites, simply shadowing a scientist for part of their day can provide a deep understanding of the user’s daily tasks and how the product fits into their workflow. This passive observation technique can reveal unspoken hurdles and needs. What was also extremely important for us was documenting environmental factors in the lab that may affect the product’s use, such as contamination, noise levels, or the physical layout. These observations can be crucial for understanding the context in which the product is used. Post-visit, it’s vital to have a structured debrief session where team members share their observations and insights. This information should be documented and made accessible to the entire Product, Design, and Engineering team. Don’t be shy and present those insights! 👩‍🏫

Suggested Article: "The Value of Participatory Design in Research" in UX Magazine.

Bridge UX Design and Engineering in Voice AI

Aligning with Engineering — Generated by my prompts in DALL·E 3

Understanding the limitations of voice and AI technologies is vital. Don't let the PM do that alone. You need to be able to juggle between tech limitations and user problems to provide the best solution for the users. You must understand the tech limitations to offer great UX in cutting-edge technologies.

What is feasible, and what is technically achievable? Refrain from assuming that what Google or Alexa did will be easy to replicate, even if they have been doing it for quite some time. The OpenAI API has made some of our old wild dreams come true in recent months. Still, it is essential to remember that the "FAANG teams 💪" (even if they sometimes get a lot of hate for what they are doing) are large and have excellent knowledge & experience working with AI, NLP, or Voice UX, for example.

For example, we’ve instituted a practice where our Designers actively participate in technical feasibility and grooming sessions alongside our Engineers. This integration has been instrumental in fostering a deep mutual understanding between the Design and Engineering teams. Our Designers gain valuable insights into technical constraints and possibilities, informing their creative process with a practical perspective. Conversely, Engineers develop a greater appreciation for Design objectives and user experience considerations. This collaborative approach ensures that our designs are not only user-centric but also technically feasible, leading to a more streamlined development process and a product that truly resonates with both its functionality and user experience.

Further Reading: "Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days" by Jake Knapp.

Set Clear, Focused Goals for Each Product Feature

Defining the Main Goal(s) — Generated by my prompts in DALL·E 3

Focusing on specific needs and contexts is essential for each feature. Setting SMART goals ensures each feature has a clear, measurable, attainable, relevant, and time-bound objective. I have been in a few discussions where the user problem is roughly framed, but the main goal differs or needs clarification. If you don't prepare and communicate your problem and goals correctly, it will 100% lead to long feedback cycles, delays in delivery, and, thus, frustration.

Let's say you want to build a feature for Scientists to set timers by voice because they can't interact with their physical timers ⏲ in the lab with gloves on. What is the goal here? For example, it is essential to know which users you want to address and where the problem mostly happens. Do you want to handle recurring timers? Write a clear goal 🎯 based on a suitable framework tailored to your Product and stick to it. And be careful: it doesn't mean you won't have to change it if you realize you were wrong during the iteration process!

For example, for the feature described earlier, it could look like this:
The primary goal is to provide a seamless, hands-free experience for setting, monitoring, and altering timers through intuitive voice commands. The solution will focus on accommodating the diverse timing needs of different experiments, including support for single-use and recurring timers. By deeply understanding the specific contexts and scenarios in which lab scientists operate, the feature will be optimized for accuracy, ease of use, and reliability, ensuring minimal disruption to the workflow and maximizing lab safety.
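
As a thought experiment only, here is how the core of such a timer intent could be modeled and parsed from a transcribed utterance. The data model, regex, and keywords are hypothetical, meant to show how the goal above (hands-free operation, single-use vs. recurring timers) translates into concrete design decisions; this is not LabTwin's implementation.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class TimerIntent:
    duration_seconds: int
    recurring: bool                # single-use vs. recurring timer
    label: Optional[str] = None    # optional experiment-specific label

_UNIT_SECONDS = {"second": 1, "minute": 60, "hour": 3600}

def parse_timer_command(utterance: str) -> Optional[TimerIntent]:
    """Extract a timer intent from a transcribed utterance, or None if no timer is found."""
    text = utterance.lower()
    match = re.search(r"(\d+)\s*(second|minute|hour)s?", text)
    if not match:
        return None
    duration = int(match.group(1)) * _UNIT_SECONDS[match.group(2)]
    recurring = any(word in text for word in ("every", "recurring", "repeat"))
    return TimerIntent(duration_seconds=duration, recurring=recurring)

# Example:
# parse_timer_command("Set a recurring timer for 15 minutes")
# -> TimerIntent(duration_seconds=900, recurring=True, label=None)
```

Even a toy sketch like this surfaces the questions the goal statement has to answer: which units and phrasings must be supported, how recurrence is expressed, and what happens when no duration can be recognized.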

Further Reading: "SMART Goals: How to Make Your Goals Achievable" on MindTools.

Foster Collaborative UX Design with Stakeholders' Involvement

Involving Stakeholders in UX Design — Generated by my prompts in DALL·E 3

Involving Stakeholders in the UX Design process ensures big-picture feedback. Sharing detailed designs asynchronously for feedback aligns with agile methodologies, emphasizing continuous collaboration. At every step of the Design process, I do 1:1s, quick alignments with the team, or share well-detailed and described Figma boards or prototypes 📱 so I can collect feedback and decide what to do with it without organizing long Design review meetings.

Be mindful during Design reviews and avoid, as much as possible, having the most opinionated Stakeholder drive the discussion and decision. Let's not forget the saying: "The loudest one in the room is the weakest one in the room." For everyone to feel helpful in the feedback process, you need to be crystal clear about what you want feedback on: "Can you think of an edge case we're missing?" "Will it solve the main user problem?" "Do you understand how to use this feature?" ❓

For example, in our Weekly Design Review meetings with Product Managers at LabTwin, we focus on fostering constructive dialogue by integrating specific feedback techniques. First, we emphasize specific, actionable feedback; participants are encouraged to pinpoint precise areas of improvement and propose clear, actionable solutions. For instance, rather than a vague comment like “this interface isn’t intuitive,” we look for targeted suggestions such as “relocating the navigation bar could enhance user accessibility.” Next, we follow a structured approach of identifying issues first before proposing solutions, ensuring that feedback is not just a series of reactive comments but a thoughtful analysis of the Design’s effectiveness. Finally, each session culminates with a clear action plan. This approach ensures that our discussions are not only productive in the moment but also lead to tangible enhancements in our Design process.

Further Reading: "Agile Experience Design: A Digital Designer's Guide to Agile, Lean, and Continuous" by Lindsay Ratcliffe and Marc McNeill.

Enhance Innovation through Co-Design and Co-Research

Co-Design and Co-Research — Generated by my prompts in DALL·E 3

Collaboration extends beyond mere participation. Co-Designing and co-Researching alongside Researchers and Designers from the same team is fundamental. Creating a safe environment to challenge ideas and discuss multiple options is critical for innovation. This collaborative 🤝 approach ensures that designs and insights are rigorously vetted and refined, leading to more robust and user-centric solutions.

When you work with another UX Designer, you can create more copy, Miro boards, and UX and UI variations, and speed up the "finding the right concept" process for complex projects. It is a fun way to work together, and it prevents us from "being stuck" by looking at the same thing over and over. The co-design process is like a brainstorming session where no idea is too wild — sometimes, the "crazy" 🤪 ideas are the ones that lead to breakthroughs. On top of that, it prevents a "half-baked" solution from getting in front of a PM, Engineer, User, or other Stakeholder. Remember, you want to take care of the small details that could otherwise distract others from giving meaningful feedback 💬.

For example, during our weekly Research & UX alignment sessions, the collaborative dynamics among the UX Researcher, Senior Product Designer, and Head of UX Research and Design become particularly evident and beneficial. The UX Researcher brings valuable insights into user behaviors and needs, ensuring that our designs are rooted in real user data. This research feeds into the creative prowess of the Senior Product Designer, who leverages these insights to craft intuitive and visually appealing design solutions. Meanwhile, the Head of UX Research and Design oversees the integration of these elements, aligning them with the broader strategic objectives of our product. This synergy not only enriches the creativity and problem-solving within the team but also reinforces a holistic approach where research informs design and vice versa. The result is a product development process that is innovative, user-centric, and highly responsive to the evolving needs of our users, ensuring that every Design decision we make is both grounded in reality and aligned with our strategic vision.

Further Reading: "Convivial Toolbox: Generative Research for the Front End of Design" by Liz Sanders and Pieter Jan Stappers

Strengthen Teamwork Across Design, PMs, and Engineers

Collaborative Work with PMs and Engineers — Generated by my prompts in DALL·E 3

Effective collaboration with PMs and Engineers is essential. Embracing Design thinking in cross-functional teams leads to more innovative, user-centered solutions central to integrated Product teams. A design and/or research team working as a "service team 💁" separated from Engineers and PMs is a lousy team. You must have overlaps, discussions, and workshops with Engineers and PMs. The Design Thinking process is about solving problems together in cross-functional teams.

The more decisions you make alone about the implementation details, user flows, etc., the less involved the other parties will be. At the end of the day, it needs to be and feel like teamwork. And remember, there is no such thing as "fighting against the PM or Engineer" about who is writing a ticket or acceptance criteria. You only need to fight 🥊 for the users and keep the team focused on efficiently solving the most significant user problems.

For example, we are constantly refining our product development processes to enhance cross-functional collaboration and efficiency. Recognizing the challenge of differing priorities among team members, we establish a shared vision right at the outset of each project. This is achieved through alignment sessions where objectives and key results (OKRs) are defined collectively, ensuring that everyone’s contributions are focused towards common goals. Additionally, to bridge knowledge gaps, we have embedded a culture of continuous learning: knowledge-sharing sessions called “Masterclasses” are held where team members from different functions share insights about their areas of expertise, fostering a deeper understanding and appreciation of each other’s roles. Furthermore, as I said earlier, our Design team is actively involved in most Engineering ceremonies. This practice ensures that design decisions are well-informed by technical insights and vice versa, effectively preventing knowledge gaps from arising. These strategies have significantly improved our agility and ability to adapt, ensuring that we continue to innovate and enhance our product in alignment with both user needs and technological advancements.

Further Reading: "Why Cross-Functional Teams Build Better Software"

Effectively Present & Test Concepts with Users and Stakeholders

Showcasing Concepts to Users and Stakeholders — Generated by my prompts in DALL·E 3

Presenting concepts to users for feedback is critical. It is like unveiling a new invention — you're not sure if it will get applause or confused stares, but it's always thrilling 🤩. Make sure to use various testing methods, including usability testing, to ensure that the final features align with user expectations and needs. It helps to find the right balance so the feedback remains at the concept level and does not become too detail-oriented. Still, I have heard many PMs or other Stakeholders say throughout my career, "We are not ready to show the new Feature X."

The result is waiting until the last moment, when the feature is almost done, to get negative feedback from users or Stakeholders (yes, it is now too late to change meaningful things). Not being confident 😒 about showing something at any stage of the process is often a sign of a lack of documentation, cross-functional collaboration, and user problem evidence, and, thus, a lack of confidence about the solution's impact.

For example, we’ve adopted an approach that underscores the importance of early and frequent user testing, while also making it an engaging experience for all involved. Our Senior Product Designer & Product Manager once organized a fun user testing session where Engineers used our app to mix cocktails. This inventive scenario turned a standard testing exercise into an enjoyable and enlightening experience, providing valuable insights into the app’s usability in a creative and relaxed setting. This approach not only yielded useful feedback but also helped bridge the gap between Engineers and end-users, fostering a sense of empathy and a deeper understanding of user needs.

Further Reading: "Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems" by Steve Krug.

Track and Measure the Success of Features and Products

Tracking Feature/Product Success — Generated by my prompts in DALL·E 3

Maintaining a database of user feedback for feature prioritization is vital. This data-driven Design approach ensures continuous improvement based on user insights and data. The UX Research team logs and curates user insights in a research database. We can keep track of the most frequent user feedback and requests 🔝 and help the Product team prioritize them.

Our research database is indexed regularly by features, requests, and user problems, allowing us to return to specific content when working on a new feature. Instead of restarting from scratch, we can use bits and pieces of old research to get started faster when we pick up new tasks.
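
As an illustration of what such indexing could look like under the hood, here is a small, hypothetical sketch of an insight record tagged by feature and user problem, with a simple lookup. The field names and tags are assumptions made for the example, not our actual research database or Dovetail's schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class Insight:
    summary: str
    source: str                                    # e.g. "user interview", "support ticket"
    features: List[str] = field(default_factory=list)
    user_problems: List[str] = field(default_factory=list)
    recorded_on: date = field(default_factory=date.today)

def find_insights(database: List[Insight], feature: str) -> List[Insight]:
    """Return previously logged insights tagged with a given feature."""
    return [insight for insight in database if feature in insight.features]

# Example: reuse older findings when picking up a (hypothetical) "voice timers" feature again.
# relevant = find_insights(research_db, "voice timers")
```

The exact tooling matters less than the discipline: consistent tagging is what lets the team start a new feature from accumulated evidence instead of from scratch.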

For example, we emphasize as much as possible the balance between qualitative and quantitative data to gauge user satisfaction and feature effectiveness. Qualitative insights, gathered through user interviews, surveys, and feedback forms, provide us with a deep understanding of user experiences, opinions, and feelings. This type of data offers context and depth to the quantitative metrics, such as user engagement rates and task completion times, revealing not just how users interact with our Product, but also why they interact in that way. Integrating these two types of data is crucial in the decision-making process. Utilizing tools like Dovetail, we efficiently collate, analyze, and interpret qualitative data, seamlessly merging it with quantitative metrics. This (complicated) comprehensive approach enables us to make informed decisions that are deeply rooted in a nuanced understanding of our users’ needs and experiences, ensuring that our Product development is both user-centric and data-driven.

Further Reading: "Measuring the User Experience: CollectingDesignyzing, and Presenting Usability Metrics" by William Albert and Thomas Tullis.

In developing LabTwin, we've focused on ensuring that our Design meets the real needs of scientists using AI and voice technology. It's not just about using the latest technology in labs; it's about making everything more accessible and more natural for the users.

We've learned a lot along the way. These lessons 📚 remind us that our job is more than building new technology. We are here to create tools that help scientists in their daily work. The success of LabTwin can be seen in how it simplifies complex tasks and supports scientific research for our customers.

As we move forward ⏩, we are excited to face new challenges and seize new opportunities. This moment is not the end of the LabTwin journey but a point where we continue to grow and bring new ideas into the ever-changing world of user experience in lab technology.

Reach out to our digitalization experts to discuss your own use cases and digitalization goals, or simply book a demo in our calendar.

Special shoutout to Jackie and Julia — your dedication and hard work are the backbone of our success. And to Mitch for his invaluable feedback for writing this piece.

 

 
