Training the Machines: An Introduction To Types of Machine Learning


by Yuri Brigance

I previously wrote about deep learning at the Edge. In this post, I’ll describe how to set up an end-to-end Machine Learning (ML) workflow for the different types of machine learning.

There are three common types of machine learning training approaches, which we will review here:

  1. Supervised
  2. Unsupervised
  3. Reinforcement

And since all learning approaches require some type of training data, I will also share three methods to build out your training dataset via:

  1. Human Annotation
  2. Machine Annotation
  3. Synthesis / Simulation

Supervised Learning:

Supervised learning uses a labeled training set of both inputs and outputs to teach a model to yield the desired outcome. This approach typically relies on a loss function, which measures the error between the model’s predictions and the correct outputs; training continues until that error has been sufficiently minimized.

This type of learning approach is arguably the most common, and in a way, it mimics how a teacher explains the subject matter to a student through examples and repetition.

One downside to supervised learning is that this approach requires large amounts of accurately labeled training data. This training data can be annotated manually (by humans), via machine annotation (annotated by other models or algorithms), or completely synthetic (ex: rendered images or simulated telemetry). Each approach has its pros and cons, and they can be combined as needed.

Unsupervised Learning:

Unlike supervised learning, where a teacher explains a concept or defines an object, unsupervised learning gives the machine the latitude to develop understanding on its own. Unsupervised learning often finds trends and patterns that a person would otherwise miss. Frequently these correlations elude common human intuition and can be described as non-semantic. For this reason, the term “black box” is commonly applied to such models, including the awe-inspiring GPT-3.

With unsupervised learning, we give data to the machine learning model that is unlabeled and unstructured. The computer then identifies clusters of similar data or patterns in the data. The computer might not find the same patterns or clusters that we expected, as it learns to recognize the clusters and patterns on its own. In many cases, being unrestricted by our preconceived notions can reveal unexpected results and opportunities.   

Reinforcement Learning:

Reinforcement learning teaches a machine to act using a semi-supervised approach. The machine is rewarded for correct answers, and it learns to behave in whatever way maximizes that reward. Reinforcement learning is an efficient way to train a machine on a complicated task, such as playing video games or teaching a legged robot to walk.

The machine is motivated to be rewarded, but it doesn’t share the operator’s goals. So if the machine can find a way to “game the system” and collect more reward at the cost of accuracy, it will greedily do so. Just as machines can find patterns that humans miss in unsupervised learning, they can also find overlooked loopholes in reinforcement learning and exploit those invisible patterns to receive additional reward. This is why your experiment needs to be airtight to minimize exploitation by the machines.

For example, an AI twitterbot that was trained with reinforcement learning was rewarded for maximizing engagement. The twitterbot learned that engagement was extremely high when it posted about Hitler.

This machine behavior isn’t always a problem – for example, reinforcement learning helps machines find bugs in video games that could be exploited if they aren’t resolved.

Datasets:

Machine Learning implies that you have data to learn from. The quality and quantity of your training data have a lot to do with how well your algorithm can perform. A training dataset typically consists of samples, or observations. Each training sample can be an image, audio clip, text snippet, sequence of historical records, or any other type of structured data. Depending on which machine learning approach you take, each sample may also include annotations (correct outputs / solutions) that are used to teach the model and verify the results. Training datasets are commonly split into groups so that the model trains on only a subset of all available data. This allows a portion of the dataset to be used for validation of the model, to ensure that the model has generalized well enough to perform on data it has not seen before.

Regardless of which training approach you take, your model can be prone to bias which may be inadvertently introduced through unbalanced training data, or selection of the wrong inputs. One example is an AI criminal risk assessment tool used by courts to evaluate how likely a defendant is to reoffend based on their profile as input. Because the model was trained on historical data, which included years of disproportionate targeting by law enforcement of low-income and minority groups, the resulting model produced higher risk scores for low-income and minority individuals. It is important to remember that most machine learning models pick up on statistical correlations, and not necessarily causations.

Therefore, it is highly desirable to have a large and balanced training dataset for your algorithm, which is not always readily available or easy to obtain. This is a task which may initially be overlooked by businesses excited to apply machine learning to their use cases. Dataset acquisition is as important as the model architecture itself.

One way to ensure that the training dataset is balanced is through a Design of Experiments (DOE) approach, where controlled experiments are planned and analyzed to evaluate the factors which control the value of an output parameter or group of parameters. DOE allows multiple input factors to be manipulated to determine their effect on the model’s response. This gives us the ability to exclude certain inputs which may lead to biased results, as well as a better understanding of the complex interactions that occur inside the model.

Here are three examples of how training data is collected, and in some cases generated:

  1. Human Labeled Data:

What we refer to as human-labeled data is anything that has been annotated by a living human, either through crowdsourcing or by querying a database and organizing the dataset. An example of this could be annotating facial landmarks around the eyes, nose, and mouth. These annotations are pretty good, but in certain instances they can be imprecise. For example, the definition of “the tip of the nose” can be interpreted differently by different humans who are tasked with labeling the dataset. Even simple tasks, like drawing a bounding box around apples in photos, can have “noise” because the bounding box may have more or less padding, may be slightly off center, and so on.

Human labeled data is a great start if you have it. But hiring human annotators can be expensive, and the results can be error-prone. Various services and tools exist, from AWS SageMaker Ground Truth to several startups that make the labeling job easier for annotators and connect annotation vendors with clients.

It might be possible to find an existing dataset in the public domain. For facial landmarks, for example, WFLW, iBUG, and other publicly available datasets are perfectly suitable for training, and many have licenses that allow commercial use. It’s a good idea to research whether someone has already produced a dataset that fits your needs, and it might be worth paying for a small dataset to bootstrap your learning process.

  2. Machine Annotation:

In plain terms, machine annotation is where you take an existing algorithm or build a new algorithm to add annotations to your raw data automatically. It sounds like a chicken and egg situation, but it’s more feasible than it initially seems.

For example, you might already have a partially labeled dataset. Let’s imagine you are labeling flowers in bouquet photos, and you want to identify each flower. Maybe you had some portion of these images already annotated with tulips, sunflowers, and daffodils. But there are still images in the training dataset that contain tulips which have not been annotated, and new images keep coming in from your photographers.

So, what can you do? In this case, you can take all the existing images where the tulips have already been annotated and train a simple tulip-only detector model. Once this model reaches sufficient accuracy, you can fill in the remaining missing tulip annotations automatically. You can keep doing this for the other flowers. In fact, you can crowdsource humans to annotate just a small batch of images with a specific new flower, and that should be enough to build a dedicated detector that can machine-annotate your remaining samples. In this way, you save time and money by not having humans annotate every single image in your training set or every new raw image that comes in. The resulting dataset can be used to train a more complete production-grade detector, which can detect all the different types of flowers. Machine annotation also gives you the ability to continue improving your production model by continuously and automatically annotating new raw data as it arrives. This achieves a closed-loop continuous training and improvement cycle.

Another example is where you have incompatible annotations. For example, you might want to detect 3D positions of rectangular boxes from webcam images, but all you have are 2D landmarks for the visible box corners. How do you estimate and annotate the occluded corners of each box, let alone figure out their position in 3D space? Well, you can use a Principal Component Analysis (PCA) morphable model of a box, fit it to the 2D landmarks, and then de-project the fitted shape into 3D space using the camera intrinsics. This gives you full 3D annotations, including the occluded corners. Now you can train a model that does not require PCA fitting.

In many cases you can put together a conventional deterministic algorithm to annotate your images. Sure, such algorithms might be too slow to run in real time, but that’s not the point. The point is to label your raw data so you can train a model whose inference runs in milliseconds.

Machine annotation is an excellent choice to build up a huge training dataset quickly, especially if your data is already partially labeled. However, just like with human annotations, machine annotation can introduce errors and noise. Carefully consider which annotations should be thrown out based on a confidence metric or some human review, for example. Even if you include a few bad samples, the model will likely generalize successfully with a large enough training set, and bad samples can be filtered out over time.

  3. Synthetic Data:

With synthetic data, machines are trained on renderings or in hyper-realistic simulations – think of a video game of a city commute, for example. For Computer Vision applications, a lot of synthetic data is produced via rendering, whether you are rendering people, cars, entire scenes, or individual objects. Rendered 3D objects can be placed in a variety of simulated environments to approximate the desired use case. We’re not limited to renderings either, as it is possible to produce synthetic data for numeric simulations where the behavior of individual variables is well known. For example, modeling fluid dynamics or nuclear fusion is extremely computationally intensive, but the rules are well understood – they are the laws of physics. So, if we want to approximate fluid dynamics or plasma interactions quickly, we might first produce simulated data using classical computing, then feed this data into a machine learning model to speed up prediction via ML inference.

There are many commercial applications of synthetic data. For example, what if we needed to annotate the purchase receipts for a global retailer, starting with unprocessed scans of paper receipts? Without any existing metadata, we would need humans to manually review and annotate thousands of receipt images to assess buyer intentions and semantic meaning. With a synthetic data generator, we can parameterize the variations of a receipt and accurately render them to produce synthetic images with full annotations. If we find that our model is not performing well under a particular scenario, we can just render more samples as needed to fill in the gaps and re-train.

Another real-world example is in manufacturing, where “pick-and-place” robots use computer vision on an assembly line to pick, arrange, and assemble products and components. Synthetic data can be applied in this scenario because the same 3D models used to create the injection molds for those components can be rendered into training samples that teach the machines. You can easily render thousands of variations of such objects being flipped and rotated, as well as simulate different lighting conditions. The synthetic annotations will always be 100% precise.

Aside from rendering, another approach is to use Generative Adversarial Network (GAN) generated imagery to create variation in the dataset. Training GAN models usually requires a decent number of raw samples, but with a fully trained GAN generator it is possible to explore the latent space and tweak parameters to create additional variation. Although it’s more complex than classical rendering engines, GANs are gaining steam and have their place in the synthetic data generation realm. Just look at these generated portraits of fake cats!

Choosing the right approach:

Machine learning is on the rise across industries and in businesses of all sizes. Depending on the type of data, the quantity, and how it is stored and structured, Valence can recommend a path forward which might use a combination of the data generation and training approaches outlined in this post. The order in which these approaches are applied varies by project, and boils down to roughly four phases:

  1. Bootstrapping your training process. This includes gathering or generating initial training data and developing a model architecture and training approach. Some statistical analysis (DOE) may be involved to determine the best inputs to produce the desired outputs and predictions.
  2. Building out the training infrastructure. Access to Graphics Processing Unit (GPU) compute in the cloud can be expensive. While some models can be trained on local hardware at the beginning of the project, a scalable, serverless training infrastructure and a proper ML experiment lifecycle management strategy are desirable in the long term.
  3. Running experiments. In this phase we begin training the model, adjusting the dataset, experimenting with the model architecture and hyperparameters. We will collect lots of experiment metrics to gauge improvement.
  4. Inference infrastructure. This includes integrating the trained model into your system and putting it to work. This can be cloud-based inference, in which case we’ll pick the best serverless approach that minimizes cloud expenses while maximizing throughput and stability. It might also be edge inference, in which case we may need to optimize the model to run on a low-powered edge CPU, GPU, TPU, VPU, FPGA, or a combination thereof.

What I wish every reader understood is that these models are simple in their sophistication. There is a discovery process at the onset of every project where we identify the training data needs and which model architecture and training approach will get the desired result. It sounds relatively straightforward to unleash a neural network on a large amount of data, but there are many details to consider when setting up Machine Learning workflows. Just like real-world physical research, Machine Learning requires us to set up a “digital lab” which contains the necessary tools and raw materials to investigate hypotheses and evaluate outcomes – which is why we call AI training runs “experiments”. Machine Learning has such an array of truly incredible applications that there is likely a place for it in your organization as part of your digital journey.

JumpStart Your Machine Learning Success

Kopius supports businesses seeking to govern and utilize AI and ML to build for the future. We’ve designed a program to JumpStart your customer, technology, and data success. 

Tailored to your needs, our user-centric approach, tech smarts, and collaboration with your stakeholders equip teams with the skills and mindset needed to:

  • Identify unmet customer, employee, or business needs
  • Align on priorities
  • Plan & define data strategy, quality, and governance for AI and ML
  • Rapidly prototype data & AI solutions
  • And, fast-forward success

Partner with Kopius and JumpStart your future success.




A New Approach to Quality Assurance



There’s no single path that can bring someone into a career in technology, and Quality Assurance is a common entry point throughout the tech industry, including at Kopius.

Someone in QA is responsible for improving software development processes and preventing defects in production. The truth is that the industry hasn’t done a great job of making QA a great job. It’s common to hear stories about long thankless hours, short notice, disorganized processes, and QA engineers being treated as an afterthought. We’ve worked hard to get QA right at Valence.

We have built QA into our agile engineering processes, which is as good for our employees as it is for our clients. What we do with QA is common for Agile teams, but it’s not the typical QA process, structure, or employment path found in industries like gaming, so we have outlined a few of the things that make our QA program unique at Valence.

The driving principle of our QA program is that we recognize that our QA engineers are vital members of a cross-functional team. We are invested in each QA engineer and their career path, making mutual long-term commitments. Valence is in the software industry, where QA engineers can enjoy greater career longevity and professional growth, and stay ahead of the curve with the latest and greatest technologies.

A typical day in QA at Valence 

“My main objective is to deliver a quality product to our customers. We test products from CX perspectives and from functionality perspectives to ensure best-in-class experiences on all platforms like web browsers, mobile, and tablets.”

Raanadil Shaikh, Quality Assurance Lead

Our QA team works on technology-driven projects, testing features across multiple platforms, focusing on the end-customer and their experiences.  

Valence’s teams follow a standard agile scrum practice. We involve the QA engineers as we start and finish sprints so we can include the QA team’s needs and expectations in our planning. Like many technology firms, our QA team uses pull requests to trigger testing. The QA engineer runs the test cases defined during sprint planning, and features that meet the acceptance criteria are validated.

One of the features of the QA process that makes it so effective is that it is flexible.

“The QAs at Valence have a lot of flexibility to create the structure and testing that is right for the project. In previous QA roles, I’ve had to follow a very strict prescribed process, even if it wasn’t right for the project. That’s not the case here.”

Emily Bright, QA Analyst

Bright adds, “QA is heavily impacted by the people you work with. Our job is to find issues with the work that other people do. Everyone at Valence is very open and accepting of the QA team’s input, and they tend to presume that the QA engineer is correct, which is really nice.” 

What’s also unique is that we share tools across teams as much as possible. Our QA team uses the same tools as the rest of the engineering team. Our QA engineers are empowered to work with different versions of code (using the Git command line) and to deploy builds to their local machine or cloud stack. Our QA team doesn’t write code, but they interact with code. This isn’t as common a practice at other firms as it should be.

Automation is a big part of QA at Valence, which dramatically improves the work experience of our team. We use automated testing, so our QA engineers don’t have to repeatedly run the same tedious processes, which is exhausting and uninspiring. Thanks to automated acceptance tests, we get to focus our QA engineers’ attention on the more interesting aspects of the project: the new features, ad hoc testing, and the end-to-end customer experience. It’s a big part of the reason that our QA team is happier than most.

“Monotony is the biggest risk of many other common QA roles, but that’s not the case at Valence because of the variety of projects and technologies we use,” according to Jaison Wattula, who oversees Valence’s QA program. Shaikh agrees, “I’m challenged every day because I’m not limited to one product for a long time – I’m testing the latest products on different platforms, which is really cool.” 

Additionally, Valence QA engineers are often front and center with the client, our developers, and their peers. Our QA team touches every feature and needs to understand the project goals, development principles, and approach as well as any other member of the team. Since they are the first line of defense against bugs and errors, the project goes better when the QA engineers collaborate with cross-functional teams (including clients) and participate in decision-making.  

“Valence is a special place because there is a real appreciation for diverse perspectives and viewpoints – I love collaborating with coworkers and clients to find the right innovation for a project.”

Raanadil Shaikh, Quality Assurance Lead

Bright adds, “When I find issues (which is the fun part), I usually either come up with a solution, offer a workaround, or find another way to help the team solve the bug. This is faster, more collaborative, and an important part of my contribution.”  

While we have typical days, we don’t have typical projects. Our QA team needs to be comfortable interacting with new and emerging technologies. Valence has a varied roster of services and technologies, and our QA team interacts with all of it.  

Who thrives in QA at Valence? 

Our team is successful because the people here are passionate tech enthusiasts who are detail-oriented, curious, and want to contribute to the whole development process.  

Valence has an Always Learning culture, and this is particularly true of the QA team. People who love learning new technologies, platforms, tools, and best practices thrive here. “There are never-ending learning possibilities,” says Shaikh. 

Ambitious tech enthusiasts who want to use QA as a stepping stone to other parts of the industry do well here. While it’s rare to transition from QA to code writing, the QA engineer role is a clear incubator role to grow into other technology positions and expand skills. Because our QA engineers are exposed to the process and every role within that process, they are uniquely positioned to choose their next career step within Valence. QA engineers can grow into project coordination, project management, dashboarding and visualization, and more. It’s the right place to start if you want to grow into the non-code and more abstract technology roles where you guide and support client projects.  

“We work hard to hire the right people for our QA team, and even with all that effort, nothing makes me happier than seeing a member of our QA team get promoted into other areas of the business.”

Jaison Wattula, Director of Reporting and Automation

Does this sound like you? We’re hiring! You can find the Quality Assurance Lead job description here. 

Technology is Your Untapped Weapon in the Talent War


By Sarah Hansen

The demand for skilled labor, and the effort needed to keep employees engaged and committed in this highly competitive market, are well-documented. CEOs and CHROs have been talking about the talent war since before our company was founded. And based on recent analysis, company leaders are showing some serious battle scars as the demand for talent begins to overtake COVID response as a top concern for leadership.

These concerns are compounded by signs of a potential hangover from the COVID pandemic: a “resignation boom” or “turnover tsunami”. A March 2021 survey by Prudential found that 1 in 4 workers are thinking about resigning, whether it’s to seek adventure, recharge, or because they are rethinking life choices.

The mixed messages about job reports, unemployment, and recruiting challenges can be a lot to take in. One truth amid these trends is that technology has a central role to play in the talent strategy for organizations.

There are several reasons that digital transformation needs to be part of your business strategy, and recruiting is quickly moving to the top of the list.

According to a study by Monster, 82% of employers are confident that they will be ready to ramp up hiring this year, and 40% of respondents are filling new roles on top of the vacancies created by the Pandemic economy. At Kopius, we are certainly among them — as VP of People and Talent, I am experiencing the intense hiring environment first-hand.

Here are a few of my key takeaways as I go back into battle in this talent war:

Interviewing/Recruiting

The upside to remote recruiting is that we can more easily schedule interview panels so that candidates can meet more of the team in a shorter period. This allows greater scheduling flexibility, and more importantly, it gives people greater access to each other. If your recruiting strategy includes pursuing passive candidates, remote interviewing also makes it easier for candidates to schedule a conversation without disrupting their commitments to their current employer. Obviously, we are excited about the role that virtual reality could play in the recruiting process, but there are also some mainstream technologies to support remote recruiting, such as:

  • Video conferencing
  • Scheduling apps
  • Recruitment marketing automation
  • Mobile-first recruiting assets
  • Applicant tracking systems
  • Skills assessment platforms

Remote Work, Culture, and Retention

This issue is close to my heart — we work hard to find the best talent, and we’ve curated a company culture that is compelling, motivating, and engaging. How does that culture shift when we aren’t in the office together? The human-to-human connection starts with having the right people in the company.

The number of remote workers increased by 140% between 2005 and 2020, and those numbers will only go up post-pandemic. Having the infrastructure in place to onboard, engage, and collaborate with remote workers increases the talent pool. And our internal employee satisfaction surveys support what many third-party studies are finding: for certain workers, satisfaction is higher when remote work is an option. If you are filling technical positions, it’s also helpful to have remotely accessible assessment tools to replace the live whiteboarding exercises of pre-pandemic recruiting. Anecdotal feedback has been that candidates also find this less stressful and feel their performance is better reflected without the whiteboard time constraints. The essentials to support remote work and culture include:

  • Mobile-friendly platforms
  • Streamlined and consistent data infrastructure and document management
  • Internal social media
  • Video conferencing
  • Chat
  • Virtual white boards and collaboration platforms
  • Survey tools to check in on employee sentiment and satisfaction

Work is changing, business is evolving, and technology is at the center of everything. Even with the unknowns facing the business community in the next many months, there is no doubt that technology is going to be a significant piece of the puzzle.

And since I’ve got you here, we’re hiring! If you are interested in working with a responsible company with an amazing team at the intersection of innovation and inspiration, check out our careers page!


Deep Learning at the Edge


By Yuri Brigance

What is deep learning?

Deep learning is a subset of machine learning where algorithms inspired by the human brain learn from large amounts of data. Machines use those algorithms to repeatedly perform a task so they can gradually improve outcomes. The process includes deep layers of analysis that enable progressive learning. Deep learning is part of the continuum of artificial intelligence and is resulting in breakthroughs in machine learning that are creative, exciting, and sometimes surprising. This graphic published by Nvidia.com provides a simplified explanation of deep learning’s place in the progression of artificial intelligence advancement.

Deep learning is part of the continuum of artificial intelligence.

Deep learning has an array of commercial uses. Here’s one example: You are a manufacturer and different departments on the factory floor communicate via work order forms. These are hand-written paper forms which are later manually typed into the operations management system (OMS). Without machine learning, you would hire and train people to perform manual data entry. That’s expensive and prone to error. A better option would be to scan your forms and use computers to perform optical character recognition (OCR). This allows your workers to continue using a process they are familiar with, while automatically extracting the relevant data and ingesting it into your OMS in real-time. With machine learning and deep learning, the cost is a fraction of a cent per image. Further, predictive analytics models can provide up-to-date feedback about process performance, efficiency, and bottlenecks, giving you significantly better visibility into your manufacturing operation. You might discover, as one of our customers did, that certain equipment is under-utilized for much of the day, which is just one example of the types of “low hanging fruit” efficiency improvements and cost savings enabled by ML.

What are the technologies and frameworks that enable Deep Learning?

While there are several technology solutions on the market to enable deep learning, the two that rise to the top for us right now are PyTorch and TensorFlow. They are equally popular and currently dominate the marketplace. We’ve also taken a close look at Caffe and Keras, which are slightly less popular but still relevant alternatives. That said, I’m going to focus on PyTorch and TensorFlow because they are the market leaders today. To be transparent, it’s not clear that they lead the market because they are necessarily better than the other options. TensorFlow is a Google product, which means it benefits from Google’s market influence and integrated technologies. TensorFlow has a lot of cross-platform compatibility and is well-supported on mobile, edge computing devices, and web browsers. Meanwhile, PyTorch is a Facebook product, and Facebook is significantly invested in machine learning. PyTorch was built as a Python-native machine learning framework and now includes C++ APIs, which gives it a market boost and feature parity with TensorFlow.

Deploying to the Edge

What about deploying your models to the edge? My experience with edge ML workloads started a while back when I needed to get a model running on a low-powered Raspberry Pi device. At the time the inference process used all the available CPU capacity and could only process data once every two seconds. Back in those days you only had a low-powered CPU and not that much RAM, so the models had to be pruned, quantized, and otherwise slimmed down at the expense of reducing prediction accuracy. Drivers and dependencies had to be installed for a specific CPU architecture, and the entire process was rather convoluted and time consuming.

These days the CPU isn’t the only edge compute device, nor the best suited to the task. Today we have edge GPUs (ex: NVIDIA Jetson), Tensor Processing Units (TPU), Vision Processing Units (VPU), and even Field-Programmable Gate Arrays (FPGA) which are capable of running ML workloads. With so many different architectures, and new ones coming out regularly, you wouldn’t want to engineer yourself into a corner and be locked to a specific piece of hardware which could become obsolete a year from now. This is where ONNX and OpenVINO come in. I should point out that Valence is a member of Intel’s Partner Alliance program and has extensive knowledge of OpenVINO.

ONNX Runtime is maintained by Microsoft. ONNX is akin to a “virtual machine” for your pre-trained models. One way to use ONNX is to train the model as you normally would, for example using TensorFlow or PyTorch. Conversion tools can convert the trained model weights to the ONNX format. ONNX supports a number of “execution providers” which are devices such as the CPU, GPU, TPU, VPU, and FPGA. The runtime intelligently splits up and parallelizes the execution of your model’s layers among the different processing units available. For example, if you have a multi-core CPU, the ONNX runtime may execute certain branches of your model in parallel using multiple cores. If you’ve added a Neural Compute Stick to your project, you can run parts of your model on that. This greatly speeds up inference!

OpenVINO is a free toolkit that facilitates optimizing a deep learning model from a framework and deploying it with an inference engine onto compatible hardware. It comes in two versions: an open-source version and one supported by Intel. OpenVINO is an “execution provider” for ONNX, which means it allows you to deploy an ONNX model to any compatible device (even an FPGA!) without writing platform-specific code or cross-compiling. Together, ONNX and OpenVINO provide the ability to run any model on any combination of compute devices. It is now possible to deploy complex object detection, and even segmentation, models on devices like humble webcams equipped with an inexpensive onboard VPU. Just like an octopus has multiple “brains” in its tentacles and head, your system can have multiple edge inference points without the need to execute all models on a single central node or stream raw data to the cloud.

Thanks to all these technologies, we can deploy machine learning models and deep learning programs on low powered devices, often without even requiring an internet connection. And even using these low powered devices, the deep learning projects are producing clever results where the machines learn to identify people, animals, and other dynamic and nuanced objects.

What should you do with this information?

This depends on who is reading this. If you are a business owner or operator who is curious about machine learning and deep learning, your key takeaway is that any data scientist you work with should have a mastery of these technologies.

If you are a data scientist, consider these technologies to be must-haves on your skill inventory.

JumpStart Deep Learning for Your Business

Kopius supports businesses seeking to govern and utilize AI and ML to build for the future. We’ve designed a program to JumpStart your customer, technology, and data success. 

Tailored to your needs, our user-centric approach, tech smarts, and collaboration with your stakeholders equip teams with the skills and mindset needed to:

  • Identify unmet customer, employee, or business needs
  • Align on priorities
  • Plan & define data strategy, quality, and governance for AI and ML
  • Rapidly prototype data & AI solutions
  • And, fast-forward success

Partner with Kopius and JumpStart your future success.




Let’s talk about Unified Data Governance (UDG)


By Jim Darrin

Unified Data Governance, or UDG, describes the process of consolidating disparate data sources to create a single data narrative across the myriad data stores within an organization.

Technology is at the heart of every modern company, and data management is more than a side effect of a business. Rather, data is an asset and a risk factor that increases in importance as businesses grow and move along the arc of their digital maturation.

According to McKinsey, only a small fraction of companies effectively leverage data-informed decision-making strategies, yet those that make data-informed decisions have outperformed competitors by 85% in sales growth and by more than 25% in gross margins. McKinsey also reported that in 2015 corporations paid $59 billion for US regulatory infractions, money those corporations could have used for other purposes.

Unified Data Governance

Unified data governance is not just critical for large companies — in fact, the earlier you are on your technology journey, the better positioned your business is to establish best practices and infrastructure that can scale into the future.

A unified data governance strategy will make sure that a business and its people can develop and deliver trusted data to the right users at the right time and in the right format. Being able to manage a business’s critical data assets can unleash opportunity within the business, reduce regulatory risk, improve business insights, and eliminate manual processes.

Why are we excited about UDG? Unified data governance aligns perfectly with Valence’s engineering and innovation strategy capabilities. Analytics and reporting are in our DNA, and business-focused innovation is what gets us out of bed every day.

Unified data governance can break down data silos, improve data quality, lower data management costs, increase access for users, and reduce compliance costs and risks.

Therefore, we’ve released a new service offering, Valence Unified Data Governance. We are bringing businesses into the data unification process, and we currently see the potential for Microsoft Azure Purview to be a uniquely scalable and stable unified data governance technology. Valence is a Microsoft Gold Partner, and our relationship with Microsoft made it a no-brainer for Valence to be among the first to market with an offering based on Purview. You can read the press release about this new offering here.

Should you be thinking about UDG?

While every modern business needs to address its data and governance, organizations in regulated industries are particularly prime candidates for a UDG strategy. In addition to the common issues of manual data management, inconsistent reporting results, and disparate data sources, regulated industries have the added risk of compliance failures. Organizations in regulated industries are also likely to have high data volume, diverse data sources, data silos, ownership issues, incomplete data documentation, and data-source fidelity problems. Regulated industries like law firms, healthcare organizations, and state/local governments have the most to gain by adopting UDG sooner rather than later.

Here’s what our own Steven Fiore thinks about UDG: “I’ve got years of experience working in state and local governments, and I know first-hand that smart and hardworking public servants are faced with tight budgets and challenging manual data management processes. Unified data governance is desperately needed in these organizations, and I feel personally excited to help people to find a better way.”

Here are four features of UDG that we are excited about:

  1. Unified data governance helps businesses understand what their sensitive data is, where it lives, and how it is being used.
  2. UDG also helps organizations understand what is and isn’t protected, compliance risks, and what their need is for additional safeguards such as encryption.
  3. UDG with Purview allows us to aggregate multiple data sources and connect certain types of information like social security and credit card numbers or employee IDs. With UDG, businesses can identify these different types of information and associate them with their data sources.
  4. One feature of UDG that seems simple but can be a game changer is that it allows you to see where the source data in a report comes from.

We work with clients to address their data and reporting using an array of technologies and techniques — the first step is to understand your data landscape, and then to develop a data governance roadmap. With the roadmap in place, your business can rapidly implement the UDG solution — and then experience the acceleration and opportunity that is made possible with a modern technical solution engineered and designed to better manage your data at scale.

JumpStart Your Success Today

Innovating with technology is crucial, or your business will be left behind. Our expertise in technology and business helps our clients deliver tangible outcomes and accelerate growth. At Kopius, we’ve designed a program to JumpStart your customer, technology, and data success.

Kopius has an expert emerging tech team. We bring this expertise to your JumpStart program and help uncover innovative ideas and technologies supporting your business goals. We bring fresh perspectives while focusing on your current operations to ensure the greatest success.

Partner with Kopius and JumpStart your future success.




5G and the Next Decade of Digital Transformation


By: Jim Darrin, CEO

I am excited to announce today that Kopius has joined the 5G Open Innovation Lab (5G OI Lab) ecosystem as a Technical Partner. This is an incredibly important milestone for the company given our focus on the next decade of digital transformation. Since the beginning, Kopius has operated under the belief that yes, “software is eating the world,” and that every company, enterprise, and non-profit — you name it — will not only feel the impact but, more importantly, be able to embrace this ongoing transformation. 5G and digital transformation go hand-in-hand.

Our three-pronged thesis is simple: the trend of (1) increasingly capable cloud software platforms from Google, Microsoft, Amazon, and more, plus (2) improved and lower-cost hardware platforms from robotics companies, VR headsets (we see you, Oculus!), and more, plus (3) always-on, high-speed internet access to every part of the physical world will create a cocktail of innovation opportunities like we have not seen before. And 5G is a critical ingredient. This is where we play as Kopius: in the middle of that mix are enormous opportunities for companies to build solutions, think of new business models, improve user experiences, and more.

We have worked with T-Mobile for years and were thrilled earlier this year to hear about the 5G OI Lab. Founded by T-Mobile, Intel, and NASA, the 5G OI Lab is set up to be a global ecosystem of developers, start-ups, enterprises, academia and government institutions that bring engineering, technology and industry resources together. Kopius will now be a key part of this Lab as a Technical Partner to help companies think through and execute on projects and programs that take full advantage of 5G technologies and capabilities. And we couldn’t be happier about it.

JumpStart Your Success Today

Innovating with technology is crucial, or your business will be left behind. Our expertise in technology and business helps our clients deliver tangible outcomes and accelerate growth. At Kopius, we’ve designed a program to JumpStart your customer, technology, and data success.

Kopius has an expert emerging tech team. We bring this expertise to your JumpStart program and help uncover innovative ideas and technologies supporting your business goals. We bring fresh perspectives while focusing on your current operations to ensure the greatest success.

Partner with Kopius and JumpStart your future success.

Digital Transformation: How to Get Started on Meaningful Innovation


By Steven Fiore

For the last few years, “Digital Transformation” has been the buzzword du jour. Behind the phrase itself is a fundamental desire for businesses to innovate in the digital age. There are plenty of statistics demonstrating why digital transformation demands meaningful innovation:

56% of CEOs say digital improvements have led to increased revenue.

Digitally mature companies are 23% more profitable than their less mature peers.

Digital-first companies are 64% more likely to achieve their business goals than their peers.

The problem is innovation can be confusing and disruptive, contributing to some more disturbing statistics like:

Of the $1.3 trillion spent on digital transformation in 2018, an estimated $900 billion was wasted when initiatives didn’t meet their goals.

70% of digital transformations fail, most often due to resistance from employees.

At Kopius, we have a tried and proven approach to helping our customers identify and drive meaningful innovation that maximizes the economic and customer experience gains while minimizing expense, risk, and disruption. You may be thinking, “that must be expensive and time consuming.” The reality is we typically deliver Innovation Workshops over a two-week period, and our customers normally only need to be involved for about ten hours over those two weeks. Maybe you’re thinking, “My organization is too big or too small to meaningfully innovate.” The truth is we’ve successfully helped everyone from startups to Fortune 500 companies through this process. Innovation isn’t just for the very well-funded or very agile; it can make any organization more effective.

While there’s no magic to the process, there is a key ingredient to successful transformation initiatives — people. Experience has taught us that having the right people in the room can make all the difference in the world. Remember that statistic above about digital transformations failing due to resistance from employees? Part of why that happens is captured in an old adage: “people don’t resist change, people resist being changed.” If you can get key stakeholders from all impacted groups, from frontline workers to back office operations, not only do better ideas emerge, but also the participants in the workshop become evangelists for the recommended solutions. They can carry the message to their peers of how they were involved from the very beginning and how it will positively impact their group/team. We’ve even seen the workshops become team building experiences where groups that don’t normally work together start to see common ground between them, thus replacing silos with partnerships.

What else makes these workshops successful? Making sure the scenarios and solutions that emerge focus on quickly delivering business value. Kopius uses a structured framework to move from brainstorming ideas to drafting solutions based on those ideas to identifying the key value drivers (both financial and strategic) for each solution so that the solutions can be prioritized based on the highest value in the shortest time with the least complexity.

Once everyone agrees on the prioritized solutions, the Kopius team then drills down one more layer. This involves high-level scoping for top priorities, whilst keeping in mind dependencies, organizational constraints, technical and organizational requirements, and best practices in technology adoption. This scoping is used to create a roadmap that clearly lays out the order, scope, level of effort, and timelines for implementing each of the top solutions. The roadmaps are also laid out in rapid, deliverable-focused sprints such that incremental value can be realized as quickly as possible, generating buy-in, confidence, and support as each iteration builds on lessons learned in the last one. Equally important, each new iteration also increasingly proves out the value proposition originally promised.

Kopius is not just a workshop company; the deliverables from these workshops are not ivory-tower theoretical exercises destined to gather digital dust. Our core competency is making our customers successful with cutting-edge technologies like Augmented Reality, Virtual Reality, Robotic Process Automation, Machine Learning, Artificial Intelligence, the Internet of Things, Voice and Chat services, Big Data and Analytics, Blockchain, and more. We are always completely ready and more than happy to execute against the solutions on the roadmap — we’re also equally happy (ok, maybe slightly less happy) if our customers want to tackle some or all of the solutions internally or with other partners. Kopius’ goal is to contribute to our customers’ success and make sure every deliverable meets or exceeds our customers’ expectations, whether that’s a workshop, a proof of concept, a pilot, or a full-blown implementation.

JumpStart Your Success Today

Innovating with technology is crucial, or your business will be left behind. Our expertise in technology and business helps our clients deliver tangible outcomes and accelerate growth. At Kopius, we’ve designed a program to JumpStart your customer, technology, and data success.

Kopius has an expert emerging tech team. We bring this expertise to your JumpStart program and help uncover innovative ideas and technologies supporting your business goals. We bring fresh perspectives while focusing on your current operations to ensure the greatest success.

Partner with Kopius and JumpStart your future success.


Remote Working Challenges and Mitigations in Product Deployment


With the dramatic shift of entire companies and industries to remote work, we have identified some common challenges or pitfalls that may occur in product deployment. It’s important to realize the current situation is very different from team members working remotely occasionally, or even part of a team working remotely. The undefined timeline of entirely remote workplace environments means we need to consider each aspect of the development strategy from a new angle: from ideation, to agile work streams, to product deployment.

This article highlights some of the learnings we have found to be helpful at Valence and around the industry. Every product deployment situation is unique, so we recommend considering your team’s dynamics and working style.

Common Challenges Observed:

  • Communication and rapport can suffer when our entire office and our clients’ offices are working remotely
  • Individual contributors may feel isolated
  • A lack of common working hours may lead to slower iteration cycles
  • Workload balancing across team members may be more difficult to measure
  • Team members struggle to learn from each other as there is less opportunity to unofficially help each other
  • Culture can get fragmented and is likely to change as there is no common physical space
  • Best practices may begin to suffer or fragment as tribal knowledge becomes more siloed

Unfortunately, a whole slew of challenges emerges during entire-team remote work, but in this article, we will focus on challenges directly relating to product deployment and progress.

At Valence, we believe that Distributed Development and Remote Collaboration require a proactive approach to keep teams and organizations aligned.

Suggestions & Best Practices: Product Deployment and Progress

Communication and Rapport

Communication is more complicated when we are all physically scattered, and it is much easier to get out of sync. Non-verbal communication may be lacking, and communication tone may be misinterpreted. Gaining consensus and iteration cycles may take longer. Since face-to-face meetings may not be easily accessible, we recommend the following:

The Valence Approach:

  • Use tools that broadcast status: Tools like Slack or MS Teams can update your status based on your calendar. This is analogous to how people make quick decisions to initiate impromptu conversations when you are sitting next to them in a cubicle. Add a profile picture in applications like Slack and Teams, especially for times when your video is not live during remote meetings.
  • Link multiple communication platforms together: In today’s world, most of us have a handful of email addresses and different inboxes, multiple Slack workspaces and channels, an MS Teams account, and a variety of back-up drives. Having a multitude of tools for someone to reach you typically results in a communication failure. A simple way to streamline is to connect various services in Slack or Teams and identify a single place where these communications reach you. The biggest return on an IT investment will come from connecting communications to the tool used most.

At Valence, we use a collection of tools. We typically align our tool choices to the client’s needs, and we have found the following tools useful in our daily communications and digital work.

– Slack, MS Teams, Office 365, Power BI, Jira, Confluence, GitHub, Azure DevOps, Microsoft Power Automate, G Suite, Zoom, WebEx

Empower team members to share communication preferences:

  • Are after hours messages OK?
  • Prefer Slack, Teams or email?
  • Common times for communicating?
  • Turn-around time for messages?

Use different tools for different types of conversations:

Use Video on Conference Calls: It can be helpful to begin a conference call with camera video on to maintain rapport amongst teammates (or with clients!). Video helps establish comfort and provides body language cues from employee to manager. Managers can lead the way in establishing this as a common practice by ensuring their video is active.

Slower Iteration Cycles

Teams may struggle to get feedback in a timely manner, which in turn may slow development or degrade delivery quality. Preliminary research shows that coding teams are struggling with cycle times (according to LinearB, cycle times are up 45%, a significant increase). Slower email replies or a lagging internet connection may get in the way, but we have some suggestions to support iteration cycles and keep them as agile as possible.

The Valence Approach:
  • PMs should create and leverage a “Feedback Needed” Slack or Teams channel for quick feedback conversations
  • Shared Slack channels with the client can help tremendously with soliciting client feedback and speeding up iteration time; these can be official or unofficial and will still be beneficial!
  • A blend of asynchronous stand-ups (posting in a Slack channel) and synchronous stand-ups over video call can keep communication flowing without waiting for another person to be available.
  • Remember to avoid a “dump truck” methodology, where the majority of changes occur at the end of the milestone or sprint, which results in heavy lifting to get feedback. Provide transparency and iterate along the way using these tips, and your deployment will be much easier and more efficient.

Balancing Workload / Productivity

A sudden shift in operational paradigms is likely to place additional effort on some individuals more than others. Typically, the individuals who find themselves working harder are the ones who facilitate communications and lead the teams. This is because communication is less impromptu and more intentional.

The Valence Approach: Plan, Plan, Plan for Meetings and Prepare your Audience:
  • The goal for most meetings should be to make decisions.
  • Proposals, agendas, and topics should be prepared ahead of time in document form; all decision makers should review them in advance so that the meeting time is spent wisely.
  • Leverage meeting time to discuss specific material on which a decision needs to be made, or any clarifications required to make a decision. As a benefit of this step, your meeting times will be significantly reduced and will provide your team with more flexibility.

Individual Isolation

Research from LinearB shows us that “92% of dev teams are writing more code since working-from-home,” since theoretically employees can carve out uninterrupted blocks of time to focus on their work. However, this does not offset the loss of team connection that happens at the same time. Without a physical space to travel to, employees may feel isolated and disconnected from their team(s). In an office space, there’s opportunity to move around, swing by a colleague’s desk, or catch up over a cup of coffee. In a remote working environment, it is tough to replicate that experience and feeling of community. We have a few recommendations that have worked well for our teams:

Common Best Practices and Tips:

  • It can be okay for employees and teammates to take certain types of calls while on a walk. For example, one-on-one calls don't always need to happen at the computer, and notes can be jotted down on a phone.
  • Managers can create an "open mic" call where any team member can jump in and simply leave their mic on throughout the day. This effectively simulates an open workspace and allows for that "swinging by" feeling.
  • Create a common shared music streaming channel or playlists.
  • Share personal moments and experiences at the beginning of calls to create a sense of connection.

At Kopius, we’ve piloted:

  • Monthly happy hours and coffee hours
  • "Meet the new teammate" lunches, led by management and attended by the CEO
  • All-hands meetings, complete with dinner delivery coupons (think GrubHub) so employees can grab a meal on the company
  • Virtual onboarding, with essential hardware and office items delivered to new employees

Effective Digital Transformation


The phrase "digital transformation" is commonly used today, referring to everything from an overhaul of a legacy system to leveraging online systems to engage customers. As champions of digital transformation, our team believes in the power of smartly planned and efficiently executed digital transformations to enhance business strategy. Effective digital transformation is a cornerstone of business, and it is imperative that individuals understand its definition, potential impact, and the processes that lead to success.

Effective Digital Transformation: How do we think about it?

Effective digital transformation puts business strategy ahead of digital strategy while interweaving the two. Successful digital transformation solves business problems by focusing on the customer (for example, by decreasing costs or increasing value) and by using technology solutions that cut across business functions, industries, and processes to effect change. In short, technology is a means to an end.

Digital transformation may help reduce product costs, but what does that do for the business? It frees up resources that can be routed into other aspects of the business. Leverage those freed-up resources to enhance the customer experience and you are left with improved margins, happier customers, and an effective digital transformation.

Consider Amazon, a company that digitally transformed itself from a book seller into one of the Big Four technology companies. Amazon leveraged digital transformation initiatives to change its supply chain and operational efficiency in order to provide a better customer experience. Its culture (the world-famous 14 Leadership Principles) and business strategy are interwoven to focus on the customer: Amazon Prime has some of the fastest delivery options in the market, and Amazon Web Services provides some of the best cloud solutions for enterprises. Amazon digitally transformed its business and now provides customers with digital solutions to digitally transform theirs. From its website: "Amazonians… share a common desire to always be learning and inventing on behalf of our customers." Leverage culture and technology to improve the customer experience; digitally transform the business to help the customer.

Digital transformation contains components of digital strategy, digitalization, and digitization. These terms, often used interchangeably, are in fact pieces of the larger puzzle rather than equal to the overall process. Digitization is the process of moving from analog to digital, from pen and paper to Microsoft Excel. Digitalization, according to Gartner, is the use of digital strategies, technologies, and initiatives to tap into new business opportunities or change a business model. If anything, one leverages digitization in order to digitalize, and the overall transformation of a business from one state to another becomes digital transformation. The definitions are debated and often vague, as discussed by Jason Bloomberg in this Forbes article. It is important to remain consistent in thinking of digital transformation as the overarching umbrella of strategic digital initiatives to improve the business with the customer at the forefront.

Digital Transformation: Consider “The Process” towards success

What does Digital Transformation success entail? What does it look like?

As enterprises restructure their strategy to evolve amid a changing technological and economic landscape while centering around the customer, it is important to consider the process and what it takes to succeed.

Key Stages to Success

According to Keller and Price in Beyond Performance: How Great Organizations Build Ultimate Competitive Advantage, successful transformation involves a few key stages: defining goals, assessing the organization, designing and initiating the transformation, and sustaining it. It is critical to understand where the enterprise is and where it wants to go, and to remain consistent and practical.

Ensuring Success

To move forward with a transformation initiative, it is imperative to align Keller and Price's stages with McKinsey's five themes of a successful digital transformation, which use digitization to prepare an enterprise for digitalization:

  • Having the right, digital-savvy leaders in place
  • Building capabilities for the workforce of the future
  • Empowering people to work in new ways
  • Giving day-to-day tools a digital upgrade
  • Communicating frequently via traditional and digital methods

Think about the Amazon example again: the company didn't just leverage digital solutions to overhaul its business; it also leveraged cultural practices to ensure that Amazonians are driven toward the integration of technology and customer centricity. McKinsey's themes encompass a similar outlook, with empowerment, communication, capabilities, and leadership as the core cultural understandings that can support a digital transformation initiative.

At Valence, we focus heavily on thinking about the future. It is critical to be ever ready for tomorrow, whether that means continuous learning or building systems and solutions to prepare for what is next. These stages and themes help ensure enterprises are thinking about the next step and focusing on being proactive rather than reactive. At this important juncture of the 21st century, as we cross into a new decade and face the challenge of economic reinvention due to a global pandemic, it matters how we use technology to transform our enterprises to meet changing customer needs.

In summary, as stated by Jim Darrin, CEO of Valence,

“No industry or company can ignore the importance or impact of Digital Transformation, and must embrace a digital strategy in order to evolve into the next generation.”

Leverage Kopius and JumpStart Success

At Kopius, we’ve designed a program to JumpStart your customer, technology, and data success.

Our JumpStart program fast-tracks business results and platform solutions. Connect with us today to enhance your customer satisfaction through a data-driven approach, drive innovation through emerging technologies, and achieve competitive advantage.

Add our brainpower to your operation by contacting our team to JumpStart your business.

Additional Resources:

Digital Transformation Trends Shaping 2020


By Renee Christensen.

Let’s talk about digital transformation trends!

Now that we are firmly immersed in 2020, which digital technology trends are gaining traction as necessities for future success? With businesses facing many external challenges, every company can use an edge in understanding where to invest its resources for the future. The elephant in the room is obviously the economic impact of COVID-19, but there are other challenges to keep in mind as well, such as the compounding impacts of climate change, the re-hashing of global trade agreements, and the growing consumer-income gap.

Daniel Newman, a contributor to Forbes Magazine, recently called out several trends that have emerged in the last few years and reached their tipping point into the mainstream. He notes that while AR/VR, IoT, cloud, and edge computing "will continue to be foundational to our collective digital transformation journey", several newer trends are poised to have a significant impact in 2020 and beyond. I want to highlight three of these trends, as well as touch on AR/VR, since our team has seen concrete evidence of them taking hold and delivering real value to the bottom line:

Robust data analytics and digital privacy will be critical to future success. A business must know its customers better than its competitors do in order to retain and delight them. When dealing with customer data, it must make data protection a top priority to maintain customer trust. Kopius sees this through an ever-increasing number of clients who need to capture, process, and digest vast amounts of data more effectively and securely. Using AI and machine learning, our team recently built a field analytics engine for a commercial drone company to harness its immense amount of graphical data, cutting processing time from 72 hours to 30 minutes. Another client came to us with over 1 billion lines of data that we were able to capture and streamline into a sophisticated yet easy-to-use visual dashboard, taking an overwhelming data set and making it immediately digestible to drive actionable business insights.

The next trend, conversational AI (voice and chat) technology, gives businesses the ability to meet their customers in the most natural and efficient way possible: via speech. This technology is rapidly evolving, and we will be seeing much more of it as quality improves. Here at Kopius, we built our own chatbot to enable employees to easily and naturally find company information just by asking Alexa. We have been able to apply the same technology for a Fortune 100 client in the retail environment during the last holiday season, where the chatbot engaged with hundreds of customers.
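
For readers curious about what a minimal voice integration can look like under the hood, below is an illustrative sketch (not Kopius's actual implementation) of an Alexa skill request handler written in Python with the ASK SDK. The intent name CompanyInfoIntent, the topic slot, and the lookup_company_info helper are hypothetical placeholders for whatever internal knowledge base such a skill would query.

    # Illustrative sketch only; intent name, slot, and lookup helper are hypothetical.
    from ask_sdk_core.skill_builder import SkillBuilder
    from ask_sdk_core.dispatch_components import AbstractRequestHandler
    from ask_sdk_core.utils import is_intent_name

    def lookup_company_info(topic):
        # Placeholder: a real skill would query an internal knowledge base here.
        return "Here is what I found about {}.".format(topic)

    class CompanyInfoIntentHandler(AbstractRequestHandler):
        # Handles spoken questions such as "Alexa, ask the company bot about PTO policy."
        def can_handle(self, handler_input):
            return is_intent_name("CompanyInfoIntent")(handler_input)

        def handle(self, handler_input):
            # Read the slot value the user spoke, then build a spoken response.
            slots = handler_input.request_envelope.request.intent.slots or {}
            topic_slot = slots.get("topic")
            topic = topic_slot.value if topic_slot and topic_slot.value else "the company"
            return handler_input.response_builder.speak(lookup_company_info(topic)).response

    # Wire the handler into a skill and expose it as an AWS Lambda entry point.
    sb = SkillBuilder()
    sb.add_request_handler(CompanyInfoIntentHandler())
    lambda_handler = sb.lambda_handler()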

Lastly, augmented and virtual reality are moving beyond the gaming industry as new products are becoming fully mobile (untethered) and less expensive. Rich new content is being developed to enhance learning, provide training and enable discovery across enterprises, health care, retail and higher learning institutions. This immersive modality can deliver high-touch experiences to broad audiences remotely, greatly enhancing reach and reducing cost. Currently, with self-isolation measures happening all over the world and more employees working remotely, AR/VR implementations are likely to experience a spike. We recently had the chance to deliver a virtual reality facility tour which allowed a large manufacturer to train employees from anywhere in the world on their product and processes without incurring any travel expenses. Additionally, we see more retailers embracing this technology and looking to build unique, highly engaging immersive experiences for their customers.

All in all, these are exciting digital transformation trends that we see gaining significant momentum as game changers and strong investment choices across industries in 2020 and beyond.

JumpStart Your Digital Transformation With Kopius

At Kopius, we’ve designed a program to JumpStart your customer, technology, and data success.

Our JumpStart program fast-tracks business results and platform solutions. Connect with us today to enhance your customer satisfaction through a data-driven approach, drive innovation through emerging technologies, and achieve competitive advantage.

Add our brainpower to your operation by contacting our team to JumpStart your business.
