Announcing New Voice Solution Offerings


We couldn’t be more excited to attend VOICE Summit 2019 this week in Newark, NJ! We’re eager to deliver a session on Building Voice Solutions for the Enterprise and engage with voice experts on the latest perspectives, developments, and trends in this exciting space.

In two previous posts, we shared how consumer engagement with voice drives demand in the enterprise space, and how voice technology is set to evolve rapidly in the next few years to allow features such as speaker recognition and emotional awareness. As voice becomes more deeply embedded in homes and workplaces, it will become a natural part of our lives.

In light of this forward-looking view, Kopius is excited to announce a new set of enterprise voice solution offerings across both Microsoft Azure and Amazon Web Services (AWS).

The newly released Azure Chatbot Services: 8 Week Implementation solution enables customers to get started quickly with their enterprise voice skill. Because standard access to company data and systems can be cumbersome, Valence is using conversational interfaces such as chatbots to provide a far more natural way to communicate with systems, retrieve data, and perform actions.

To illustrate the impact of this technology in the enterprise, we have released a new case study on voice-enabled inventory management. Valence worked with SteppIR Communication Systems, a next-generation communications company based in the Pacific Northwest, to deploy a voice-enabled chatbot that provides easy access to key data sets. Built on the Amazon Alexa for Business platform, our bot integrates with the inventory management system Order Time. SteppIR’s key challenge is keeping product moving through the pipeline as quickly as possible without bottlenecks. With voice-enabled access built into their inventory management system, anyone can ask for inventory levels by part number, order status, and more.
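As a rough illustration of how such a skill hangs together, the sketch below shows an Alexa intent handler (using the Alexa Skills Kit SDK for Python) answering a part-level inventory question. The intent and slot names and the get_part_inventory() call into Order Time are illustrative assumptions, not the actual SteppIR implementation.

```python
# Hypothetical Alexa skill handler for a part-level inventory lookup.
# Intent/slot names and get_part_inventory() are assumptions for illustration.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name


def get_part_inventory(part_number: str) -> int:
    """Placeholder for a call into the inventory management system."""
    raise NotImplementedError


class PartInventoryIntentHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("PartInventoryIntent")(handler_input)

    def handle(self, handler_input):
        # Pull the part number the user spoke, look it up, and answer.
        slots = handler_input.request_envelope.request.intent.slots
        part_number = slots["PartNumber"].value
        quantity = get_part_inventory(part_number)
        speech = f"There are {quantity} units of part {part_number} in stock."
        return handler_input.response_builder.speak(speech).response


sb = SkillBuilder()
sb.add_request_handler(PartInventoryIntentHandler())
lambda_handler = sb.lambda_handler()
```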

“At Valence we focus on digital transformation technologies and how they work together to deliver real business results for customers,” said Jim Darrin, President at Valence Group. “We believe natural language interfaces — and specifically the ability to access enterprise data with voice commands — is one of the next frontiers in enabling easy access to all sorts of data in the enterprise. Both Microsoft and Amazon are making incredible advancements in fundamental platform capabilities, and we are thrilled to be a partner to both companies in helping translate these cloud services into business solutions for enterprise customers around the world.”

Given the success of this project deployment and the future trend of voice technology, we expect voice will unlock key efficiencies in the enterprise space and create new competitive advantages for companies.

Additional Resources:

Voice Summit: Where Voice is Going


By Ben Parkison

I recently attended the Voice Summit and have some thoughts about where voice is going.

My daughter and Alexa were born the same year, making them both four years old. My daughter regularly interacts with Alexa, playing music, asking questions, and most importantly making Alexa make animal sounds. From watching this interaction every morning, I can tell you that my four-year-old is a more capable communicator than Alexa. And that’s no knock on Amazon: This is the case for all voice platforms today.


While we’re building real value for enterprise clients with voice today, at Valence we’re most excited about what we see coming in the next few years:

Better Technology — These systems are improving every day, both in their ability to understand inputs and in their ability to deliver natural responses. As this evolution continues, we’ll see more adoption of voice interfaces and more capable solutions.

Access to More Data — Enterprise companies are continuing to adopt modern data management strategies and cloud-first designs. A rising tide lifts all ships, and this progression will allow enterprise applications, including voice solutions, to do more.

Systems That Listen — The idea of explicitly invoking a voice interface will be an artifact of our time. As voice recognition and contextual understanding get better and better, voice assistants will be able to listen, know who is speaking, understand when action is needed, and respond immediately.

New Hardware — The number and variety of ways we can access digital assistants will only increase. AR headsets, wearables, and even more amazing tech will make voice assistants ubiquitous.

Conversational UIs — There’s more to how we use language today than just the words coming out of our mouths. As AI becomes more integrated with voice interfaces, we will truly move to conversational user interfaces that can visually identify the speaker, detect mood, understand whether you are confused, impatient, or interested in the conversation, and more. All of this will allow conversational UIs to become more emotionally aware. Do I want my conference room Alexa device to read my mood? Maybe not. But imagine an emergency room, a high-stakes negotiation, a combat scenario, or any other emotionally charged human environment. Then does it make sense for my interface to be more emotionally aware? Maybe.

As these advances continue in the next few months and years, conversational interfaces will be the tip of the spear in how we define AI and how it reshapes our relationship with digital systems.

Additional Resources

Voice Technology: Designing for the Enterprise


By Ben Parkison

Valence has been in the trenches of designing, developing, and deploying systems that use voice as an interface at enterprise companies for years. We’ve implemented voice-based solutions at companies ranging from small startups to Fortune 50 enterprises. Today I’ll share some of what we’ve learned, what we’ve seen, and why we see voice as a key opportunity for enterprise companies.

Consumer issues are enterprise issues. As voice interfaces become more and more common in the consumer space (more than 120 million smart speakers in the United States!), the same people who use voice at home to shop, control smart homes, and communicate will begin to expect it as employees and as customers. The result, aside from consumer usage driving demand in the enterprise environment, is that the standards people expect in terms of UX, flexibility, and intuitiveness in their lives as consumers are exactly what they will expect at work.

Start small, learn, and build. Voice interfaces have one unique characteristic for development teams: you get to see not only what works, but also what doesn’t. Unlike other interfaces, voice gives you a view into what your users tried to do and failed (think of when Alexa says “Sorry, I didn’t understand that”), and the list of utterances that didn’t result in an understood, actionable response is both fascinating and incredibly useful. Often, that list is key data for defining the roadmap for your voice interface.
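One way to capture that signal is to route unrecognized utterances through a fallback intent and log them. The sketch below assumes an Amazon Lex bot whose fallback intent is fulfilled by an AWS Lambda function; the Lex event’s inputTranscript field carries what the user actually said, so the bot wiring itself is omitted and only the logging step is shown.

```python
# A minimal sketch, assuming a Lex fallback intent is routed to this Lambda.
# Logging inputTranscript builds a backlog of missed utterances to review.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def lambda_handler(event, context):
    utterance = event.get("inputTranscript", "")

    # These log entries become the raw material for new intents and
    # sample utterances on the voice roadmap.
    logger.info(json.dumps({"missed_utterance": utterance}))

    # Lex V1-style "Close" response spoken back to the user.
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": "Sorry, I didn't understand that. Try asking about orders or inventory.",
            },
        }
    }
```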

Voice can be a ubiquitous experience. For enterprise apps, the spatial placement and utilization of voice interfaces is an important part of the design. Before any code is written, consider not only how a person would interact with the app in a given scenario, but also how they might interact differently across a multitude of settings. Take some common possibilities: whether they are at their desk or on the factory floor, sitting at a computer or with their hands full of equipment, in a private office or in a conference room with their peers.

Build voice to play nicely with other interfaces. If I’m in sales, I might interact with our CRM in the morning sales meeting, throughout the day via a browser as I do my work, in an app when I’m at a client site, and through integrations with other apps like Outlook or Slack. Adding voice to this ecosystem should be an intentional, well-thought-out process: leverage voice where it is powerful, don’t force it where it is not, and allow these different interfaces to interact with each other to create new opportunities.

Your target user will change. From hiring and attrition to reorgs, what you build today is probably going to be used by a lot of different people with, potentially, a lot of job titles you didn’t expect. Because of this, serious consideration needs to go into making your app as approachable, intuitive, and helpful as possible. Use nudges, session and user contexts, follow-up prompts, and other best practices to design for a user who may be well versed in the company’s subject matter but whose technical comfort and fluency can vary widely and change over time.

Well-designed voice-based solutions for the enterprise will create an intuitive way to access information, streamline workflows, and open up new ways to do business across industries. At Valence, we’re excited to build voice-based solutions that deliver real value for our clients today, while also laying the groundwork for voice to be a key part of how we think about and interact with AI-driven interfaces going forward.

Additional Resources

    The Future of Retail — Trends From NRF


    After a great week at NRF talking tech with industry leaders about the future of retail, experiencing cutting-edge demos, and learning about retail’s biggest challenges and opportunities, we’re back in the office with a heightened appreciation for the exciting possibilities ahead.

    In my last post, I announced our Retail Innovation Accelerator, which is focused on harnessing emergent technologies to develop customer solutions in an agile way. This retail solutions incubator — which is powered by internal innovation projects and strategic partnerships across voice & chat, telemetry & insights, and modern supply chain — provides a framework for our customers to more easily identify the technologies best suited to digitally transform their businesses.

    This week, I’d like to switch gears a bit and discuss some of the trends we saw at NRF. While by no means an exhaustive list, I hope this helps paint the picture of how retailers should think about digital transformation efforts over the next year and beyond.

    1. The whole is greater than the sum of its parts. One thing that stood out compared to past NRF Big Shows is how technology companies are thinking about the power of multi-platform solutions. To unpack that: it’s becoming more and more evident that there isn’t one technology that can truly transform your retail and brand experience. Rather, a suite of technologies must be integrated into a cohesive, omni-channel strategy to really move the needle. If you are considering RFID sensors to make your dressing rooms “smart”, pair them with digital signage powered by a recommendation engine to help complete your customers’ outfits. Or how about taking that a step further with an AR-powered selfie app that shows customers what those boots would look like with their new dress? Individual technologies may generate some buzz, but the right suite of technologies can truly transform a retail business.

    2. The year of the edge: connected everything. The best brands react to their customers in real time, and the only way to do that is by listening smartly. The collection — and more importantly the analysis — of data will continue to be one of the primary differentiators between successful and unsuccessful retailers. But how do you capture the kind of meaningful data that yields actionable insights? At Valence, we’ve been working in the IoT space for many years. However, it was still eye-opening to see such a strong focus on the “connected everything” store. Edge devices and sensors are seeing exponential advancements in on-board compute power, connectivity, and battery life, while AI-powered cloud services continue to evolve. With technologies like computer vision, it’s amazing how much you can learn about yourself and your customers.

    3. Retail robotics is maturing, but not there yet. We all know robotics in the warehouse is already here, but when will robots be roaming the sales floor? Can a robot re-stock shelves? How about provide wayfinding to a lost customer? We saw some compelling “front-of-house” demos from hardware and software companies alike, but at this time robotics still resonates best when tackling “back-of-house” challenges. However — as with all promising technologies — it’s only a matter of time until the cost-benefit ratio leads retailers to use robotics for more purposes.

    4. Your platform is your product. While we used to think of your platform — whether website or storefront — only as the place to showcase your product, today’s competitive landscape means that your platform might just be your most important product. There are so many ways to buy that it’s important yours is the easiest and fastest, with a little more flavor than the competition. Improving your platform can be as simple as rewriting your website copy to better match your brand’s voice, or as complex as restructuring your inventory management system and releasing a “buy online, pick up in-store” app. Whatever the approach, it’s important to understand how vital your platforms are in differentiating your brand.

    5. Amplify your voice with…voice! It seems obvious, but in order to stay relevant, brands must operate at the pace of consumers. And with the exponential evolution cycle in consumer technology, this is getting harder and harder to do. Voice may one day usurp touch as the primary user interface, and it’s important to consider the user flow for customers interacting with your brand through services like Amazon Alexa. We didn’t see as many voice-based demos at NRF as we expected…all the more reason to start investing in this space before your competitors do.

    Next week, I’ll be providing a recap of everything we’ve learned and announced throughout our month of retail. I’m excited to share the opportunities on the horizon for all retailers who are ready to adopt emergent technologies.

    Voice and Chat: Cornerstones of Digital Transformation


    We are excited to announce today the release of two new innovation programs related to voice and chat technologies! The two releases include the Healthcare Experience Innovation Accelerator as well as an internal, employee-focused “voice bot” framework based on Amazon Alexa for Business technologies.

    Our Healthcare Experience Innovation Accelerator is focused on accelerating customer projects related to understanding and applying voice-related technologies, such as Microsoft Cortana and Amazon Alexa, in real-life healthcare situations. We have been exploring all the different natural language processing services that both Microsoft and Amazon are releasing at an increasingly rapid pace, and we wanted to apply them in a real-life scenario — one that we thought could stand a bit of fixing: healthcare information discovery, appointment scheduling, and patient processing. Don’t get us wrong — we know healthcare is complicated — but we are hoping our efforts spark some imagination around the industry about what is possible.

    Our framework builds on existing voice and chat technologies and adds some healthcare-specific natural language experiences. This is the third Innovation Accelerator we have built, coming after the release of the HoloLens Innovation Accelerator this past May and the Blockchain Innovation Accelerator released this past July. Our Innovation Team thinks daily about how to apply our pillars of digital transformation in new and exciting ways to help customers “jumpstart” real-life solutions.

    Today we also released our employee-focused voice skill called “Valence Bot”. When you start at Valence, you are given an Echo Dot as one part of your onboarding hardware package — right alongside your computer. We use the Amazon Alexa for Business platform and have built a private enterprise skill that provides voice-command access to all the corporate information an employee needs to get their job done, including human resources information (benefits, employee count, and more) as well as corporate systems like IT requests and CRM data. If you want to see more, you can find details in the video we made for the Amazon Alexa for Business “This is My Skill” showcase.

    Additional Resources:

    Transforming the Employee Experience with Alexa for Business


    At Kopius*, we help enterprise customers worldwide understand and apply next-generation technologies in smart and innovative ways to advance their business goals. These goals often include improving the employee experience.

    Across modern technologies such as voice and chat, artificial intelligence, robotics, augmented and virtual reality, blockchain and more, Kopius has a wide range of innovation and consulting capabilities that deliver forward-thinking, quick-to-market and cost-effective solutions to our customers.

    One of the mantras we follow is “learn by doing.” We believe that by providing our own employees with instantiations of the same solutions we build for our customers, employees will not only understand our business better, but will be more interested and involved in what we do.

    This is why we built the “Valence Bot” for our employees — to provide them with a tool that not only helps them with their daily jobs, but also helps them understand modern voice and chat platforms — one of our key technology pillars.

    What is Valence Bot?

    Valence Bot is a chatbot that uses Amazon Alexa, Amazon’s virtual assistant, to answer employee questions and handle requests related to the company. Employees can obtain company information and make requests simply by interacting with Alexa from any personal Alexa device. This includes things like:

    · Obtaining employee benefits information such as healthcare provider information, group policy numbers, and paid time off (PTO) information.

    · Requesting IT support or submitting facility requests.

    · Accessing personalized information based on job role, such as company sales data for the Executive team, or new hire data for the Recruiting team.

    How Does it Work?

    Valence Bot is built on top of the core framework outlined in the Question and Answer Bot blog entry originally posted a year ago on the AWS Machine Learning Blog. We added functionality and integration points to this framework, bundled the solution into a private Alexa skill, and deployed it to employees using Alexa for Business. Every new Kopius employee is given an Amazon Echo device during onboarding. They are also enrolled in our Alexa for Business organization, allowing them to access the private Valence Bot skill from anywhere.

    The high-level architecture of Valence Bot is shown in the following diagram:

    Details of each component are as follows:

    • Alexa Device — A personal Alexa device that employees use to interact with Valence Bot. Kopius gives all employees an Echo Dot, but they can access Valence Bot through any existing Alexa device they may already own. Access through the Alexa mobile app is also possible.
    • Valence Bot — The private Alexa skill deployed to each employee; it serves as the user-facing interface to information and requests.
    • Alexa for Business — The management layer that is used to deploy the private skill to all employees.
    • Chat Engine — Built on Amazon Lex, the chat engine is ultimately what gets exported to the private Alexa skill. Lex is configured with just one slot and intent to capture the text of the question for further processing.
    • API Services — A web interface (protected by Cognito authentication) accessing an Amazon API Gateway instance provides administrative access to manage content served by Valence Bot.
    • Content Datastore — An Amazon Elasticsearch instance provides the mechanism to search the question and answer data for the best responses. In addition to simply returning an answer, a “hook” can be assigned to a question, which triggers custom Lambda functions to obtain additional information or handle requests.
    • Fulfillment — AWS Lambda functions serve two purposes in this architecture (a minimal sketch of this flow appears after this list):
      • Answering questions and fulfilling requests: a Lambda function queries Elasticsearch and returns the best answer directly to Valence Bot.
      • Reaching beyond the simple question-and-answer data in Elasticsearch: a hook defined in Elasticsearch triggers additional Lambda functions that fetch extra information or handle requests. Examples include integrations with third-party APIs to obtain sales data or submit service requests.
    • Notifications — Amazon notification services such as Amazon SES and SNS notify the user when appropriate, such as when their service request has been submitted, including information around expected SLAs.
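
    To make the flow concrete, here is a minimal sketch of a fulfillment Lambda function along the lines described above: it searches Elasticsearch for the best answer, invokes an optional hook Lambda, and publishes an SNS notification. The index, field, and function names and the environment variables are illustrative assumptions, not the actual Valence Bot code.

```python
# A minimal sketch of the fulfillment flow: Elasticsearch lookup, optional
# "hook" Lambda, SNS notification. Names and env vars are assumptions.
import json
import os

import boto3
from elasticsearch import Elasticsearch

es = Elasticsearch(os.environ["ES_ENDPOINT"])
lambda_client = boto3.client("lambda")
sns = boto3.client("sns")


def lambda_handler(event, context):
    question = event.get("inputTranscript", "")

    # 1. Full-text search of the question/answer content store.
    result = es.search(
        index="qna",
        body={"query": {"match": {"question": question}}, "size": 1},
    )
    hits = result["hits"]["hits"]
    if not hits:
        answer = "Sorry, I don't have an answer for that yet."
    else:
        doc = hits[0]["_source"]
        answer = doc["answer"]

        # 2. If the matched item defines a hook, invoke that Lambda to fetch
        #    live data or submit a request (e.g. an IT ticket).
        hook = doc.get("hook")
        if hook:
            hook_response = lambda_client.invoke(
                FunctionName=hook,
                Payload=json.dumps({"question": question}),
            )
            answer = json.loads(hook_response["Payload"].read())["answer"]

            # 3. Confirm the action to the employee out-of-band.
            sns.publish(
                TopicArn=os.environ["REQUEST_TOPIC_ARN"],
                Message=f"Your request has been submitted: {question}",
            )

    # Return a Lex "Close" response that Alexa speaks back to the employee.
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": answer},
        }
    }
```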

    What Do Our Employees Think?

    Teah Delfino, our Director of Recruiting, explains the benefits of Valence Bot from her perspective:

    “When I brought it home and set it up, I thought I wouldn’t use it much — I’m in HR so I have access to a lot of company information. But I find I use Valence Bot all the time — it’s so convenient to simply say, “Alexa, ask Valence Bot when our health insurance renews,” or “Alexa, ask Valence Bot who we’ve hired in the past month.” It’s also a compelling story to tell when I’m speaking with potential employees. I get a lot of positive feedback about what a unique benefit it is and how excited they are that we live in the world we work.”

    Interested in Learning More?

    Interested in learning more about Valence Bot, or how voice and chat technology could improve your employees’ experiences? Contact us and we’ll start you off with a demo to show how remarkable this technology can be!

    *Kopius performed this work under its previously known business name, Valence.

    Additional Resources:

    Chatbots: Much More Than A Novelty

    Chatbots: Much More Than A Novelty

    The promise of Artificial Intelligence and chatbots is here.

    Sure, humanoid robots aren’t yet roaming the earth, but AI-enabled applications and AI-infused services are transforming the world around us into a more intelligent, interactive, and empowered domain. Looking for a good example? Ask Siri, Alexa, Cortana, or Google Assistant. They, collectively, are the answer.

    Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, and Google Assistant are all examples of chatbots — “a computer program which conducts a conversation via auditory or textual methods.” Some chatbots use natural language processing to understand your speech and then respond verbally. Apple’s Siri is perhaps the most famous example of this type of chatbot, though Alexa and Cortana are also widely used. Other chatbots are text-based, responding to typed questions, commands, or observations. Microsoft’s Xiaoice, for example, was released in China in 2014 and, only a year later, had already been used by over 40 million smartphone owners (25% of whom had reportedly said “I love you” to their “virtual friend,” which is available on China’s two most prominent social media platforms — Weibo and WeChat).

    Chatbots have been the subject of controversy — see Microsoft’s Tay — and frequent comic derision — see, e.g., Siri. More generally, many people see them as little more than a novelty: a fun way for consumers to interact with technology. But they are much, much more than that. Simply put, chatbots are a powerful example of the proliferation of Artificial Intelligence into mainstream society. And we are just scratching the surface of their capabilities.

    To date, the landscape of chatbots available to consumers and enterprises has been dominated largely by the tech titans mentioned above. It is, though, in the process of getting significantly more diverse and dynamic, a phenomenon driven by the release of numerous chatbot frameworks for developers.

    Chatbot frameworks are essentially software development kits (SDKs) for the AI-verse. They provide a platform — the technology infrastructure — for developers to build chatbots in a manner that meets their needs. The release of frameworks like Microsoft’s Bot Framework and Facebook’s Bot Engine (wit.ai) means that any developer, be they a hobbyist or a professional service provider, can build a chatbot to improve their life or the lives of those around them.
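
    To give a feel for how little code a first bot requires, here is a minimal sketch using the Bot Framework SDK for Python (botbuilder-core). The echo behavior is purely illustrative; a real bot would route the message to business systems or an NLU service, and the hosting layer (web app and adapter) is omitted.

```python
# A minimal Bot Framework bot sketch using the Python SDK (botbuilder-core).
# Echo behavior for illustration only; hosting/adapter code is omitted.
from botbuilder.core import ActivityHandler, MessageFactory, TurnContext


class EchoBot(ActivityHandler):
    async def on_message_activity(self, turn_context: TurnContext):
        # Reply to every incoming message with the text we received.
        text = turn_context.activity.text
        await turn_context.send_activity(MessageFactory.text(f"You said: {text}"))

    async def on_members_added_activity(self, members_added, turn_context: TurnContext):
        # Greet anyone who joins the conversation (other than the bot itself).
        for member in members_added:
            if member.id != turn_context.activity.recipient.id:
                await turn_context.send_activity("Hi! Ask me anything.")
```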

    Want to build a chatbot that speaks to you in Captain Hook lingo in time for the annual Talk Like a Pirate Day (September 19)? Have at it! Think your business can benefit from a chatbot designed to provide a more intuitive way to access and organize the data that fuels your success? Build it!

    …or let us build it! Valence understands that chatbots are more than a novelty; they are a paradigm-shifting technology that can digitally transform businesses in any sector. That’s why we’re putting them to work for our clients in ways that support both their strategic objectives and their day-to-day tactics. And that’s why we’re looking forward to learning how we can put them to work for you.

    Additional Resources

    Natural Language Understanding: It’s Not Just Chat

    Natural Language Understanding: It’s Not Just Chat

    The recent standardization of speech recognition and natural language understanding (NLU) technologies has led to the first step in enterprises embracing these newly enabled capabilities: the chatbot. Beginning in 2016, enterprise businesses began to seriously invest in chat as a way to rethink interfaces to business data, to streamline how employees and customers access services, and to enable cost-saving strategies such as self-service. However, chatbots are simply the first foray into new capabilities enabled by NLU, and the next round will see the emergence of systems that leverage NLU to optimize the interface between the human and the digital, maximizing value.

    Soon, over her morning coffee, a sales director will be able to speak out loud about her previous day. Her end-to-end NLU-enabled system will respond by automatically queuing up dynamic reports on her screen, optimizing her to-do list for the new day based on her past behavior and serving it to her device, reaching out to the appropriate team members, and scheduling her meetings for the week, all while holding a conversation with her about her recommended weekly plan via her digital assistant.

    This will all be possible with systems optimized based on how humans communicate best, how digital systems process information, and how the two work together.

    Letting Humans be Human
    Speech-to-text, NLU, and the standardization of how developers interact with these algorithms have, for the first time in decades, upended how humans can pass information to digital systems. These technologies are undergoing their own renaissance for the plain fact that they allow humans to communicate in the way we’ve been doing it for millennia: speech. Speech allows users to rely on their own ability to communicate nuanced, precise flows of information in a way that is so natural, it far outstrips the touch-and-keyboard standard in terms of efficiency and accuracy.

    Letting Digital Systems do Their Thing
    Chatbots have been the first step because they are what we know. Starting with the first messenger applications, we’ve been sending and receiving information in natural-language formats for decades. This paradigm is a natural first step for NLU. But we are now on the cusp of a revolution in how digital systems process human speech and provide information back to the user in all the ways digital systems are able (and in the ways humans cannot): dynamic data explorations, virtual reality experiences, pervasive multi-device workflows, and more.

    A Perfect Balance
    By marrying Natural Language Understanding with the full menu of existing and future digital communication models, we can optimize the passing of every piece of information to fit the specific communication style and need of the sender and receiver. The result: enterprise systems that fundamentally change core business technologies, leading the way to efficiencies, cost savings and new business models.

    Additional Resources: