Engineering Ingenuity: Building a Virtual Assistant Framework
In the first part of our Engineering Ingenuity blog series, we share the journey that brought LEXIA, our virtual assistant framework, to life!
Engineering Ingenuity is a series of blog posts where we share some of the most innovative projects that have been internally developed at Critical Software, and the journey that led us to them. From virtual assistants to complex simulation systems, test environments to smart houses, there’s plenty to learn about!
We live in a world where smartphones and tablets take up a considerable chunk of people’s daily lives. But did you know that most of this usage comes down to just three types of app? That’s right! Messaging and social apps, search-based apps, and email apps are the three most used categories, and a big part of why mobile devices are used so avidly!
Short, conversational text is now the form of interaction customers engage with most, and it also happens to be one of the key challenges companies face today. Users expect an easy way of interacting with systems and corporations, and organisations must acknowledge and adapt to this new reality.
Conversational interfaces, such as chatbots and virtual assistants, have become common and companies are now using them to automate business processes. However, these two technologies can often be confused, so let’s clarify the difference. Chatbots are tools for acquiring information; the customer interacts with a chatbot to resolve any questions or concerns regarding products or services. Virtual assistants, on the other hand, are software-based agents assisting customers in performing specific tasks, interacting with them in a more effective and human-like way.
We wanted to help organisations to manage this new technology by providing them with an innovative solution: LEXIA, a virtual intelligent assistant framework with learning and cognitive capabilities. LEXIA interacts with customers by leveraging natural language and short messages, all the while being intelligent and proactive in its responses.
For example, LEXIA can help customers explore and find information about a company by integrating third-party applications through APIs, allowing users to easily perform commands and transactions with the company. It can also provide customers with alerts and insights about new products that fit their profile, as well as monitor specific scenarios.
Learn how LEXIA works
We built LEXIA so that it could adapt to different needs and be as versatile as possible. LEXIA has five major modules which can be toggled on or off depending on the situation.
The first module, and the only one that cannot be switched off, is the natural language understanding (NLU) engine. This is the core of the system and allows LEXIA to understand incoming messages. The second module is LEXIA’s knowledge base, where all the data and information regarding the organisation and its customers is stored. Thirdly, the reasoning engine provides LEXIA’s inference capabilities, useful in maintaining positive interactions with users and providing the right answers. The fourth module is the learning engine, which allows LEXIA to learn from previous user interactions and adapt to specific situations, for example by suggesting which action to take next. Finally, the fifth module is the assistant’s proactive module, enabling it to initiate an interaction with the user without any explicit request.
All these components mean LEXIA can adapt to different use cases with ease.
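To make the idea of toggleable modules concrete, here is a minimal sketch of how such a pipeline could be wired up. The class names, interfaces, and behaviour are illustrative assumptions for this post, not LEXIA’s actual code.

```python
# A minimal sketch of a toggleable-module pipeline.
# Names and interfaces are illustrative, not LEXIA's real API.

class Module:
    """Base class: each module enriches a shared context dict."""
    def handle(self, message, context):
        return context

class NLUEngine(Module):
    """Always on: parses the incoming message into a rough intent."""
    def handle(self, message, context):
        context["intent"] = message.strip().lower().split()[0]
        return context

class KnowledgeBase(Module):
    """Optional: attaches organisation/customer data to the context."""
    def handle(self, message, context):
        context["facts"] = {"greeting": "Hello! How can I help?"}
        return context

class Assistant:
    def __init__(self, optional_modules):
        # The NLU engine is the one module that cannot be switched off.
        self.pipeline = [NLUEngine()] + optional_modules

    def respond(self, message):
        context = {}
        for module in self.pipeline:
            context = module.handle(message, context)
        return context

# Toggle modules on simply by passing them in; leave them out to disable.
assistant = Assistant(optional_modules=[KnowledgeBase()])
result = assistant.respond("Hello LEXIA")
```

The point of the pattern is that each use case gets its own pipeline: dropping a module from the list disables it without touching the others.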
It wasn’t a straight line
Building a virtual assistant framework like LEXIA can often be a bumpy road, so naturally we had to tackle our fair share of challenges. Some of them related specifically to artificial intelligence, others to software engineering more generally.
One specific challenge we faced was the sheer number of AI assets spread across different technology stacks. For example, we needed to combine Python libraries, models from TensorFlow and PyTorch, and libraries in Haskell. To overcome this challenge, we used a modular architecture, mostly comprising microservices connected by message queues.
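As a rough illustration of the queue-based decoupling described above, the sketch below models two services exchanging work over a queue. In a real deployment the queue would be an external message broker; here Python’s standard library `queue.Queue` and a worker thread keep the example self-contained, and the “NLU service” logic is a placeholder.

```python
# Sketch: two "services" exchanging messages over a queue, standing in
# for the broker that connects microservices in a modular architecture.
import queue
import threading

requests = queue.Queue()   # raw user text goes in here
responses = queue.Queue()  # parsed results come out here

def nlu_service():
    """Consumes raw text from one queue, publishes a parsed intent to another."""
    while True:
        text = requests.get()
        if text is None:  # sentinel value used as a shutdown signal
            break
        responses.put({"text": text, "intent": text.split()[0].lower()})

worker = threading.Thread(target=nlu_service)
worker.start()

requests.put("Weather in Lisbon tomorrow")
parsed = responses.get()

requests.put(None)  # shut the worker down
worker.join()
```

Because each service only sees the queue, a Python consumer and a Haskell producer can coexist behind the same broker, which is what makes this style of architecture attractive for mixed stacks.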
In addition, we had to work hard to complete the design and implementation of LEXIA’s reasoning engine. This challenge gave us the chance to draw on the team’s experience with more traditional AI techniques. In AI, not all problems can or should be solved by applying the latest in machine learning or deep learning. In this case, we ended up testing a hybrid approach, combining machine learning with inference based on logic programming, similar to what can be done with Prolog.
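The hybrid idea can be sketched in a few lines: a (stubbed) classifier stands in for the machine learning side and produces facts, and a tiny forward-chaining rule engine, in the spirit of Prolog, derives conclusions from them. The rules, facts, and keyword “classifier” here are purely illustrative assumptions, not LEXIA’s actual reasoning engine.

```python
# Hybrid sketch: an ML-style classifier produces facts, and a small
# forward-chaining rule engine derives conclusions from them.

def classify_intent(message):
    # Stand-in for an ML model: a keyword lookup instead of a real classifier.
    return "balance_query" if "balance" in message.lower() else "unknown"

RULES = [
    # (premises, conclusion): if all premises hold, derive the conclusion.
    ({"balance_query", "authenticated"}, "show_balance"),
    ({"balance_query"}, "needs_account_context"),
]

def infer(facts):
    """Forward chaining: keep applying rules until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

facts = {classify_intent("What's my balance?"), "authenticated"}
conclusions = infer(facts)
```

The appeal of the combination is that the learned component handles fuzzy input while the rule base keeps the decision logic explicit and auditable.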
Despite the challenges, we already have plans to continue developing LEXIA. From creating interaction monitoring systems to multi-language interfaces, we hope to build upon the assistant’s current success.
Innovation is at the core of what we do at Critical Software. We always try to push the boundaries of what technology can do for us, day in, day out. If you’d like to know more about projects like LEXIA, subscribe to our newsletter below.