In our Team Talk series, members of the LabTwin team talk about our product development process, the problems our customers face, and solutions to those problems (both from LabTwin and from other companies). To kick off, Magdalena Paluch, our CEO, explains why voice is central to everything we do at LabTwin.
We speak over three times faster than we can type, and speech-to-text technology and artificial intelligence capabilities continue to improve rapidly. As Bradley Metrock writes in his recent post for HBR, “Much like the web back in the ’90s, voice represents a vast blue ocean of possibilities and potential.” However, the key to a successful implementation is user- and context-centricity.
As Michal Levin states, "Greater benefit would come from people getting the right thing, at the right time, on the best (available) device." At LabTwin, we believe that voice is the key to scientists getting the right information and digital assistance at the right time. Voice interaction has expanded beyond smart speakers to inhabit smart displays, projectors, and other mobile interfaces. Voice lets us connect to information at times when we cannot look at a screen. For scientists, this might be while performing a task in the lab, such as working at a tissue culture hood or a microscope, pipetting, dissecting, or performing surgery.
Voice is a powerful tool because it works as both an input and an output medium. Voice as input means controlling software or hardware just by saying something, which is useful when scientists' hands and/or eyes are preoccupied with a task. Voice as output means consuming information just by listening, which is again useful while doing something else. Voice input and output can work exceptionally well together in various lab contexts, but they can also be completely separated. At LabTwin, we understand that scientists are not always comfortable speaking to their phones in a lab full of other scientists, but they could listen to, and potentially see, information while controlling the device with gestures or their hands. In other cases, scientists might want to tell the LabTwin mobile app to do something while in the lab, and then consume the content later at their desktops.
The LabTwin team is always thinking about user experience and striving to design experiences that are consistent across devices and continuous when moving from one device to another. We believe in ecosystems and work to connect the voice interface with complementary modalities depending on the context: at the moment this means mostly different wet labs and office desks.
It is critical for us to take into account that contexts can change while scientists are interacting with LabTwin. Scientists can leverage voice while working on experiments at the bench to record observations or annotate changes to their protocols, but LabTwin can also be useful later, when they're reflecting on what did and didn't work in an experiment. The first step of information collection (e.g., annotating changes of values in a protocol, noting observations while looking through a microscope) works perfectly via voice on a mobile device at the bench. But after completing an experiment, scientists need to consume additional information in a different context: writing summaries at the desktop in their office. Most lab tools lack the functionality to move content across devices, so pieces of information are fragmented across different tools. LabTwin, on the other hand, allows scientists to integrate voice notes and written documentation seamlessly.
While LabTwin is a 'voice-first' technology, it isn't 'voice-only'. Delivering information through both images and audio provides flexibility for the user. It is also essential to recognize that visual output isn't just pictures. It might be an image that complements a voice interaction, or simply a color or light that gives the user an important signal. We are building LabTwin so that our multimodal mix is shaped by scientists' workflows and the devices with which scientists interact.
We strongly believe in giving users the choice of how, where, and with what inputs they want to interact with LabTwin. We are building our system to be well connected and embedded into the natural lab landscape, while at the same time giving scientists flexible access to the critical information they may need as input or output. LabTwin is mobile: it stays with the scientist. It's the scientist's digital twin, granting scientists access to information or instruments from anywhere in the lab. LabTwin further reduces the cognitive load on scientists by tracking timers, scheduling events, ordering stock, and more, all while working completely hands-free. As our Head of Sales, John Egerton, puts it: "The gloves stay on, the research continues."