Digital Transformation Trends that Future-Proof Your Business


The core of future-proofing your business lies in the incorporation of cutting-edge technological trends and strategic digitization of your business operations. Combining new, transformative solutions with tried-and-true business methods is not only a practical approach but an essential one when competing in this digital age. Using the latest digital transformation trends as your guide, start envisioning the journey of future-proofing your business in order to unlock the opportunities of tomorrow. 

#1 Personalization  

The importance of personalized customer experiences cannot be overstated. More than ever, consumers are faced with endless options. To stand out from competitors, businesses must use data and customer behavior insights to curate tailored and dynamic customer journeys that both delight and engage their audience. Analyze purchasing history, demographics, web activity, and other data to understand your customers, as well as their likes and dislikes. Use these insights to design customized customer experiences that increase conversion, retention, and ultimately, satisfaction.

#2 Artificial Intelligence  

AI is everywhere. From autonomous vehicles and smart homes to digital assistants and chatbots, artificial intelligence is being used in a wide array of applications to improve, simplify, and speed up the tasks of everyday life. For businesses, AI and machine learning have the power to extract and decipher large amounts of data that can help predict trends and forecasts, deliver interactive personalized customer experiences, and streamline operational processes. Companies that lean on AI-driven decisions are propelled into a world of efficiency, precision, automation, and competitiveness.  

#3 Sustainability 

Enterprises, particularly those in the manufacturing industry, face increasing pressure to act more responsibly and consider environmental, social, and corporate governance (ESG) goals when making business decisions. Digital transformations are one way to support internal sustainable development because they lead to reduced waste, optimized resource use, and improved transparency. With sustainability in mind, businesses can build their data and technology infrastructures to reduce impact. For example, companies can switch to more energy-efficient hardware or decrease electricity consumption by migrating to the cloud.  

#4 Cloud Migration 

More and more companies are migrating their data from on-premises to the cloud. In fact, by 2027, it is estimated that 50% of all enterprises will use cloud services. What is the reason behind this massive transition? Cost saving is one of the biggest factors. Leveraging cloud storage platforms eliminates the need for expensive data centers and server hardware, thereby reducing major infrastructure expenditures. And while navigating a cloud migration project can seem challenging, many turn to cloud computing partners to lead the data migration and ensure a painless shift.

Future-Proof Your Business Through Digital Transformation with Kopius

By embracing these digital transformation trends, your company is not only adapting to the current business landscape but also unlocking new opportunities for growth. Future-proofing your business requires a combination of strategic acumen and technical expertise. This is precisely where a digital transformation partner, who possesses an intimate grasp of these trends, can equip your business with the resources and solutions to confidently evolve. Reach out to Kopius today and let’s discuss a transformational journey that will future-proof your business for the digital future.  

A Step-By-Step Guide to Customer Experience Personalization


Winning the interest and loyalty of customers means more than just offering a superior product or service. The secret lies in a powerful strategy called personalization – a dynamic approach that tailors the customer experience to meet individual needs and preferences. As businesses across industries strive to create lasting connections with their customers and meet their evolving expectations, the importance of personalization in the customer experience cannot be overstated. Read on to explore the compelling case for customer personalization and a step-by-step guide on how your business can embark on this journey to elevate the customer experience.

Let’s face it, generic offerings are outdated. Today, customers yearn for something more; they want an experience that resonates with their unique tastes. Personalization is the magic ingredient that taps into this desire. By tailoring products, services, and interactions to individual preferences, businesses create a sense of connection that fosters lasting loyalty. And beyond that, research from McKinsey found that companies that implemented a personalization strategy generated 40% more revenue than their counterparts that placed less emphasis on this approach. All signs point to tailored customer journeys.

Data lies at the heart of personalization, offering insights into customer behaviors. More than ever, companies have access to a wealth of customer information, such as past purchases and browsing habits, that act as the building blocks to these insights. Leveraging advanced analytics and artificial intelligence, businesses can uncover valuable patterns and trends, guiding them to craft personalized experiences for their customers. 

Building a successful personalization strategy requires thoughtful consideration and calculated execution. If you are just getting started, follow these steps to build an improved and tailored customer experience that will drive remarkable results for your business:

Step 1: Gather as Much Customer Data as Possible.

At the core of every successful personalization strategy lies a deep understanding of your customers. To lay this solid foundation, start by gathering valuable data from multiple touchpoints along their journey, including website interactions, purchase history, and customer feedback. Take advantage of powerful tools like customer relationship management (CRM) software, website analytics, and social media insights to gain a holistic view of your customers’ preferences, behaviors, and pain points.

Step 2: Divide Your Customers Into Audience Segments.

With an abundance of data at your fingertips, it is time to move on to segmentation. Divide your customers into distinct groups based on shared traits like demographics, purchase behavior, and interests. Audience segmentation empowers you to personalize your messaging or offerings, address individual customer needs with accuracy, and create a sense of relevance.
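If you want a concrete picture of what segmentation can look like in practice, here is a minimal, illustrative sketch using k-means clustering. The customer features, values, and number of clusters below are hypothetical; real segmentation work involves richer data and validation of the resulting segments.

```python
# Illustrative sketch: segmenting customers with k-means clustering.
# The feature names and values below are hypothetical examples.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

customers = pd.DataFrame({
    "annual_spend":    [120, 950, 300, 40, 1800, 610],   # total purchases ($)
    "orders_per_year": [2,   14,  5,   1,  22,   9],
    "site_visits":     [10,  80,  25,  3,  120,  60],
})

# Standardize features so no single column dominates the distance metric.
features = StandardScaler().fit_transform(customers)

# Group customers into three segments; the right number of clusters
# depends on your data and business context.
kmeans = KMeans(n_clusters=3, random_state=42, n_init=10)
customers["segment"] = kmeans.fit_predict(features)

print(customers)
```

Each resulting segment can then be mapped to its own messaging, offers, and journey, which is exactly what the next step covers.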

Step 3: Get Personal With Your Messaging.

Now that you have completed the segmentation process, it’s time to get personal! Start by creating interesting content with tailored product recommendations, and design exclusive offers that cater specifically to the unique preferences of each of your audience segments. By doing so, you will create truly personalized experiences that captivate your audience and leave an impression.

Step 4: Automate Dynamic Content Delivery. 

Offer real-time digital experiences that resonate with your customers’ interests and past interactions. Embracing innovative technologies like artificial intelligence allows you to analyze customer data, predict behavior, and implement an effective personalization strategy that delivers tailored experiences on the fly. AI-powered chatbots take personalized support a step further, offering instant assistance to resolve customer concerns and boost overall customer satisfaction levels.

Step 5: Track Your Personalization Campaigns. 

Monitor the impact of your personalization strategy on customer engagement, satisfaction, and business performance. Evaluate key metrics like conversion rates and customer retention to assess their effectiveness. Utilize any insights gained to identify areas for improvement and modify your approach accordingly. 

The possibilities for designing a personalized digital experience are limitless. AI-powered chatbots provide real-time personalized support, making customers feel valued and cared for. Dynamic content delivery ensures website experiences are based on individual preferences. Personalization will enrich the customer journey, increasing engagement and conversion rates. If you are ready to deliver personalized experiences, Kopius is here to help. Let’s team up to create extraordinary customer experiences for your business! 

5 Industries Winning at Artificial Intelligence


By Lindsay Cox

Artificial Intelligence (AI) and Machine Learning (ML) were already the technologies on everyone’s radar when the year started, and the release of Foundation Models like ChatGPT only increased the excitement about the ways that data technology can change our lives and our businesses. We are excited about these five industries that are winning at artificial intelligence.

As an organization, we consider data and AI projects right in our sweet spot. ChatGPT is very much in the news right now (and is a super cool tool – check it out if you haven’t already).

I also enjoyed watching Watson play Jeopardy as a former IBMer 😊

Here are a few real-world examples of how organizations in five industries are winning at AI. We have included those use cases along with examples of how our clients have been leading the way on AI-related projects.

You can find more case studies about digital transformation, data, and software application development in our Case Studies section of the website.

Consumer brands: Visualizing made easy

Brands are helping customers to visualize the outcome of their products or services using computer vision and AI. Consumers can virtually try on a new pair of glasses, a new haircut, or a fresh outfit, for example.  AI can also be used to visualize a remodeled bathroom or backyard.

We helped a teledentistry, web-first brand develop a solution using computer vision to show a customer how their smile would look after potential treatment. We paired the computer vision solution with a mobile web application so customers could “see their new selfie.” 

Consumer questions can be resolved faster and more accurately

Customer service can make or break customer loyalty, which is why chatbots and virtual assistants are being deployed at scale to reduce average handle time and average speed-of-answer, and to increase first-call resolutions.

We worked with a regional healthcare system to design and develop a “digital front door” to improve patient and provider experiences. The solution includes an interactive web search and chatbot functionality. By getting answers to patients and providers more quickly, the healthcare system is able to increase satisfaction and improve patient care and outcomes.

Finance: Preventing fraud

There’s a big opportunity for financial services organizations to use AI and deep learning solutions to recognize suspicious transactions and thwart credit card fraud, which helps reduce costs. Banks generate huge volumes of data that can be used to train machine learning models to flag fraudulent transactions, a technique known as anomaly detection.
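As a simplified illustration of anomaly detection (with made-up transaction features), an unsupervised model such as scikit-learn’s IsolationForest can flag transactions that deviate from normal spending patterns. Production fraud systems are far more sophisticated, but the underlying idea is similar.

```python
# Illustrative sketch: flagging unusual transactions with an unsupervised
# anomaly detector. The transaction features below are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [amount_usd, hour_of_day, distance_from_home_km]
transactions = np.array([
    [25.0,   12, 3],
    [40.0,   18, 5],
    [32.5,    9, 2],
    [28.0,   20, 4],
    [5400.0,  3, 2200],   # unusually large, far from home, at 3 a.m.
])

# contamination is the expected share of anomalous transactions.
model = IsolationForest(contamination=0.2, random_state=0)
labels = model.fit_predict(transactions)  # -1 = anomaly, 1 = normal

for row, label in zip(transactions, labels):
    status = "FLAG FOR REVIEW" if label == -1 else "ok"
    print(row, status)
```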

Agriculture: Supporting ESG goals by operating more sustainably

Data technologies like computer vision can help organizations see things that humans miss. This can help with the climate crisis because computer vision can detect things like water waste, energy waste, and misdirected landfill waste.

The agritech industry is already harnessing data and AI since our food producers and farmers are under extreme pressure to produce more crops with less water. For example, John Deere created a robot called “See and Spray” that uses computer vision technology to detect weeds among cotton plants and spray herbicide on them in precise amounts.

We worked with PrecisionHawk to use computer vision combined with drone-based photography to analyze crops and fields, giving growers precise information to better manage their crops. The data produced through the computer vision project helped farmers to understand their needs and define strategies faster, which is critical in agriculture.

Healthcare: Identify and prevent disease

AI has an important role to play in healthcare, with uses ranging from patient call support to the diagnosis and treatment of patients.

For example, healthcare companies are creating clinical decision support systems that warn a physician in advance when a patient is at risk of having a heart attack or stroke, adding critical time to their response window.

AI-supported e-learning is also helping to design learning pathways, personalized tutoring sessions, content analytics, targeted marketing, and automatic grading. AI has a role to play in addressing the critical healthcare training need in the wake of a healthcare worker shortage.

Artificial intelligence and machine learning are emerging as the most game-changing technologies at play right now. These are a few examples that highlight the broad use and benefits of data technologies across industries. The actual list of use cases and examples is infinite and expanding.

What needs to happen for your company to win at artificial intelligence? To learn more about Artificial Intelligence and Machine Learning, reach out to us today! Kopius is a leader in nearshore digital technology consulting and services.




Addressing AI Bias – Four Critical Questions


By Hayley Pike

As AI becomes even more integrated into business, so does AI bias.

On February 2, 2023, Microsoft released a statement from Vice Chair & President Brad Smith about responsible AI. In the wake of the newfound influence of ChatGPT and Stable Diffusion, considering the history of racial bias in AI technologies is more important than ever.

The discussion around racial bias in AI has been going on for years, and with it, there have been signs of trouble. Google fired two of its researchers, Dr. Timnit Gebru and Dr. Margaret Mitchell, after they published research papers outlining how Google’s language and facial recognition AI were biased against women of color. And speech recognition software from Amazon, Microsoft, Apple, Google, and IBM misidentified speech from Black people at a rate of 35%, compared to 19% for White people.

In more recent news, DEI tech startup Textio analyzed ChatGPT, showing how it skewed toward writing job postings for younger, male, White candidates – and the bias increased for prompts for more specific jobs.

If you are working on an AI product or project, you should take steps to address AI bias. Here are four important questions to help make your AI more inclusive:

  1. Have we incorporated ethical AI assessments into the production workflow from the beginning of the project? Microsoft’s Responsible AI resources include a project assessment guide.
  2. Are we ready to disclose our data source strengths and limitations? Artificial intelligence is as biased as the data sources it draws from. The project should disclose who the data is prioritizing and who it is excluding.
  3. Is our AI production team diverse? How have you accounted for the perspectives of people who will use your AI product that are not represented in the project team or tech industry?
  4. Have we listened to diverse AI experts? Dr. Joy Buolamwini and Dr. Inioluwa Deborah Raji, currently at the MIT Media Lab, are two Black female researchers who are pioneers in the field of racial bias in AI.

Rediet Abebe is a computer scientist and co-founder of Black in AI. Abebe sums it up like this:

“AI research must also acknowledge that the problems we would like to solve are not purely technical, but rather interact with a complex world full of structural challenges and inequalities. It is therefore crucial that AI researchers collaborate closely with individuals who possess diverse training and domain expertise.”

To learn more about artificial intelligence and machine learning, reach out to us today! Kopius is a leader in nearshore digital technology consulting and services.




ChatGPT and Foundation Models: The Future of AI-Assisted Workplace


By Yuri Brigance

The rise of generative models such as ChatGPT and Stable Diffusion has generated a lot of discourse about the future of work and the AI-assisted workplace. There is tremendous excitement about the awesome new capabilities such technology promises, as well as concerns over losing jobs to automation. Let’s look at where we are today, how we can leverage these new AI-generated text technologies to supercharge productivity, and what changes they may signal to a modern workplace.

Will ChatGPT Take Away Your Job?

That’s the question on everyone’s mind. AI can generate images, music, text, and code. Does this mean that your job as a designer, developer, or copywriter is about to be automated? Well, yes. Your job will be automated in the sense that it is about to become a lot more efficient, but you’ll still be in the driver’s seat.

First, not all automation is bad. Before personal computers became mainstream, taxes were completed with pen and paper. Did modern tax software put accountants out of business? Not at all. It made their job easier by automating repetitive, boring, and boilerplate tasks. Tax accountants are now more efficient than ever and can focus on mastering tax law rather than wasting hours pushing paper. They handle more complicated tax cases, those personalized and tailored to you or your business. Similarly, it’s fair to assume that these new generative AI tools will augment creative jobs and make them more efficient and enjoyable, not supplant them altogether.

Second, generative models are trained on human-created content. This ruffles many feathers, especially those in the creative industry whose art is being used as training data without the artist’s explicit permission, allowing the model to replicate their unique artistic style. Stability.ai plans to address this problem by enabling artists to opt out of having their work be part of the dataset, but realistically there is no way to guarantee compliance and no definitive way to prove whether your art is still being used to train models. But this does open interesting opportunities. What if you licensed your style to an AI company? If you are a successful artist and your work is in demand, there could be a future where you license your work to be used as training data and get paid any time a new image is generated based on your past creations. It is possible that responsible AI creators could calculate the level of gradient updates during training, and the percentage of neuron activation associated with specific samples of data, to determine how much of your licensed art was used by the model to generate an output – just as Spotify pays a small fee to the musician every time someone plays one of their songs, or websites like Flaticon.com pay a fee to the designer every time one of their icons is downloaded. Long story short, it is likely that soon we’ll see stricter controls over how training datasets are constructed regarding licensed work vs. public domain.

Let’s look at some positive implications of this AI-assisted workplace and technology as it relates to a few creative roles and how this technology can streamline certain tasks.

As a UI designer, when designing web and mobile interfaces you likely spend significant time searching for stock imagery. The images must be relevant to the business, have the right colors, allow for some space for text to be overlaid, etc. Some images may be obscure and difficult to find. Hours could be spent finding the perfect stock image. With AI, you can simply generate an image based on text prompts. You can ask the model to change the lighting and colors. Need to make room for a title? Use inpainting to clear an area of the image. Need to add a specific item to the image, like an ice cream cone? Show AI where you want it, and it’ll seamlessly blend it in. Need to look up complementary RGB/HEX color codes? Ask ChatGPT to generate some combinations for you.

Will this put photographers out of business? Most likely not. New devices continue to come out, and they need to be incorporated into the training data periodically. If we are clever about licensing such assets for training purposes, you might end up making more revenue than before, since AI can use a part of your image and pay you a partial fee for each request many times a day, rather than having one user buy one license at a time. Yes, work needs to be done to enable this functionality, so it is important to bring this up now and work toward a solution that benefits everyone. But generative models trained today will be woefully outdated in ten years, so the models will continue to require fresh human-generated real-world data to keep them relevant. AI companies will have a competitive edge if they can license high-quality datasets, and you never know which of your images the AI will use – you might even figure out which photos to take more of to maximize that revenue stream.

Software engineers, especially those in professional services, frequently need to switch between multiple programming languages. Even on the same project, they might use Python, JavaScript / TypeScript, and Bash at the same time. It is difficult to context switch and remember all the peculiarities of a particular language’s syntax. How do you efficiently write a for-loop in Python vs. Bash? How do you deploy a Cognito User Pool with a Lambda authorizer using AWS CDK? We end up Googling these snippets because working with this many languages forces us to remember high-level concepts rather than specific syntactic sugar. GitHub Gist exists for the sole purpose of offloading snippets of useful code from local memory (your brain) to external storage. With so much to learn, and things constantly evolving, it’s easier to be aware that a particular technique or algorithm exists (and where to look it up) rather than remember it in excruciating detail as if reciting a poem. Tools like ChatGPT integrated directly into the IDE would reduce the amount of time developers spend remembering how to create a new class in a language they haven’t used in a while, how to set up branching logic, or how to build a script that moves a bunch of files to AWS S3. They could simply ask the IDE to fill in this boilerplate and move on to solving the more interesting algorithmic challenges.

An example of asking ChatGPT how to use Python decorators: the explanatory text and example code snippet it returns are very informative.
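For a concrete reference point, here is a minimal sketch of the kind of snippet such a prompt might return: a decorator that times the function it wraps. The function and naming choices are only illustrative.

```python
# A minimal example of a Python decorator: a wrapper that times the
# decorated function and prints how long it took.
import time
from functools import wraps

def timed(func):
    @wraps(func)                      # preserve the wrapped function's metadata
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.4f}s")
        return result
    return wrapper

@timed
def slow_add(a, b):
    time.sleep(0.1)
    return a + b

print(slow_add(2, 3))
```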

For copywriters, it can be difficult to overcome the writer’s block of not knowing where to start or how to conclude an article. Sometimes it’s challenging to concisely describe a complicated concept. ChatGPT can be helpful in this regard, especially as a tool to quickly look up clarifying information about a topic. Caution is justified, though: Stephen Wolfram, CEO of Wolfram Alpha, recently made a compelling argument that ChatGPT’s answers should not always be taken at face value, so doing your own research is key. That being the case, OpenAI’s model usually provides a good starting point at explaining a concept, and at the very least it can provide pointers for further research. But for now, writers should always verify their answers. Let’s also be reminded that ChatGPT has not been trained on any new information created after the year 2021, so it is not aware of new developments in the war in Ukraine, current inflation figures, or the recent fluctuations of the stock market, for example.

In Conclusion

Foundation models like ChatGPT and Stable Diffusion can augment and streamline workflows, and they are still far from being able to directly threaten a job. They are useful tools that are far more capable than narrowly focused deep learning models, and they require a degree of supervision and caution. Will these models become even better 5-10 years from now? Undoubtedly so. And by that time, we might just get used to them and have several years of experience working with these AI agents, including their quirks and bugs.

There is one important thing to take away about Foundation Models and the future of the AI-assisted workplace: today they are still very expensive to train. They are not connected to the internet and can’t consume information in real time, in an online incremental training mode. There is no database to load new data into, which means that to incorporate new knowledge, the dataset must grow to encapsulate recent information, and the model must be fine-tuned or re-trained from scratch on this larger dataset. It’s difficult to verify that the model outputs factually correct information since the training dataset is unlabeled and the training procedure is not fully supervised. There are interesting open-source alternatives on the horizon (such as the U-Net-based Stable Diffusion), and techniques to fine-tune portions of the larger model to a specific task at hand, but those are more narrowly focused, require a lot of tinkering with hyperparameters, and are generally out of scope for this particular article.

It is difficult to predict exactly where foundation models will be in five years and how they will impact the AI-assisted workplace since the field of machine learning is rapidly evolving. However, it is likely that foundation models will continue to improve in terms of their accuracy and ability to handle more complex tasks. For now, though, it feels like we still have a bit of time before seriously worrying about losing our jobs to AI. We should take advantage of this opportunity to hold important conversations now to ensure that the future development of such systems maintains an ethical trajectory.

To learn more about our generative AI solutions, reach out to us today! Kopius is a leader in nearshore digital technology consulting and services.




What Separates ChatGPT and Foundation Models from Regular AI Models?


By Yuri Brigance

This article introduces what separates foundation models from regular AI models. We explore the reasons these models are difficult to train and how to understand them in the context of more traditional AI models.


What Are Foundation Models?

What are foundation models, and how are they different from traditional deep learning AI models? The Stanford Institute for Human-Centered AI defines a foundation model as “any model that is trained on broad data (generally using self-supervision at scale) that can be adapted to a wide range of downstream tasks”. This describes a lot of narrow AI models as well, such as MobileNets and ResNets – they too can be fine-tuned and adapted to different tasks.

The key distinctions here are “self-supervision at scale” and “wide range of tasks”.

Foundation models are trained on massive amounts of unlabeled/semi-labeled data, and the model contains orders of magnitude more trainable parameters than a typical deep learning model meant to run on a smartphone. This makes foundation models capable of generalizing to a much wider range of tasks than smaller models trained on domain-specific datasets. It is a common misconception that throwing lots of data at a model will suddenly make it do anything useful without further effort. In reality, such large models are very good at finding and encoding intricate patterns in the data with little to no supervision – patterns that can be exploited in a variety of interesting ways – but a good amount of work still needs to happen in order to use this learned hidden knowledge in a useful way.

The Architecture of AI Foundation Models

Unsupervised, semi-supervised, and transfer learning are not new concepts, and to a degree, foundation models fall into this category as well. These learning techniques trace their roots back to the early days of generative modeling such as Restricted Boltzmann Machines and Autoencoders. These simpler models consist of two parts: an encoder and a decoder. The goal of an autoencoder is to learn a compact representation (known as encoding or latent space) of the input data that captures the important features or characteristics of the data, aka “progressive linear separation” of the features that define the data. This encoding can then be used to reconstruct the original input data or generate entirely new synthetic data by feeding cleverly modified latent variables into the decoder.

An example of a convolutional image autoencoder architecture trained to reconstruct its own input (e.g., images). Intelligently modifying the latent space allows us to generate entirely new images. One can expand this by adding an extra model that encodes text prompts into latent representations understood by the decoder to enable text-to-image functionality.

Many modern ML models use this architecture, and the encoder portion is sometimes referred to as the backbone with the decoder being referred to as the head. Sometimes the models are symmetrical, but frequently they are not. Many model architectures can serve as the encoder or backbone, and the model’s output can be tailored to a specific problem by modifying the decoder or head. There is no limit to how many heads a model can have, or how many encoders. Backbones, heads, encoders, decoders, and other such higher-level abstractions are modules or blocks built using multiple lower-level linear, convolutional, and other types of basic neural network layers. We can swap and combine them to produce different tailor-fit model architectures, just like we use different third-party frameworks and libraries in traditional software development. This, for example, allows us to encode a phrase into a latent vector which can then be decoded into an image.
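To make the encoder/decoder split concrete, here is a minimal (non-convolutional) autoencoder sketch in PyTorch. The layer sizes are arbitrary and chosen purely for illustration; a real model would be convolutional and far larger.

```python
# Minimal autoencoder sketch in PyTorch: an encoder compresses the input
# into a small latent vector, and a decoder reconstructs the input from it.
# Layer sizes are arbitrary and chosen only for illustration.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(          # "backbone": input -> latent space
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(          # "head": latent space -> reconstruction
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)        # latent representation
        return self.decoder(z)     # reconstruction of the input

model = AutoEncoder()
x = torch.rand(8, 784)                          # a batch of 8 flattened 28x28 "images"
loss = nn.functional.mse_loss(model(x), x)      # reconstruction loss
print(loss.item())
```

Swapping in a different decoder (or attaching several) is what lets the same encoder serve multiple downstream tasks, as described above.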

Foundation Models for Natural Language Processing

Modern Natural Language Processing (NLP) models like ChatGPT fall into the category of Transformers. The transformer concept was introduced in the 2017 paper “Attention Is All You Need” by Vaswani et al. and has since become the basis for many state-of-the-art models in NLP. The key innovation of the transformer model is the use of self-attention mechanisms, which allow the model to weigh the importance of different parts of the input when making predictions. These models make use of something called an “embedding”, which is a mathematical representation of a discrete input, such as a word, a character, or an image patch, in a continuous, high-dimensional space. Embeddings are used as input to the self-attention mechanisms and other layers in the transformer model to perform the specific task at hand, such as language translation or text summarization. ChatGPT isn’t the first, nor the only transformer model around. In fact, transformers have been successfully applied in many other domains such as computer vision and sound processing.
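For the curious, the scaled dot-product self-attention at the heart of the transformer can be sketched in a few lines of NumPy. The token embeddings and projection matrices below are random placeholders rather than learned weights, so this only illustrates the mechanism, not a trained model.

```python
# Scaled dot-product self-attention in NumPy. Each row of x is a token
# embedding; the projection matrices are randomly initialized here purely
# for illustration (in a real model they are learned).
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    q, k, v = x @ w_q, x @ w_k, x @ w_v           # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])       # similarity of every token to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over tokens
    return weights @ v                            # each output is a weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                       # 4 tokens, embedding dim 8
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)     # (4, 8)
```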

So if ChatGPT is built on top of existing concepts, what makes it so different from all the other state-of-the-art model architectures already in use today? A simplified explanation of what distinguishes a foundation model from a “regular” deep learning model is the immense scale of the training dataset as well as the number of trainable parameters that a foundation model has over a traditional generative model. An exceptionally large neural network trained on a truly massive dataset gives the resulting model the ability to generalize to a wider range of use cases than its more narrowly focused brethren, hence serving as a foundation for an untold number of new tasks and applications. Such a large model encodes many useful patterns, features, and relationships in its training data. We can mine this body of knowledge without necessarily re-training the entire encoder portion of the model. We can attach different new heads and use transfer learning and fine-tuning techniques to adapt the same model to different tasks. This is how just one model (like Stable Diffusion) can perform text-to-image, image-to-image, inpainting, super-resolution, and even music generation tasks all at once.
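Here is a minimal sketch of that head-swapping idea in PyTorch, using torchvision’s ResNet-18 as a stand-in for a (much larger) pre-trained backbone. The 10-class head is an arbitrary example task, not anything specific to ChatGPT or Stable Diffusion.

```python
# Transfer learning sketch: freeze a pre-trained backbone and attach a new
# task-specific head. ResNet-18 stands in for a much larger foundation model;
# the 10-class head is an arbitrary example task.
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")    # pre-trained encoder
for param in backbone.parameters():
    param.requires_grad = False                        # keep learned features fixed

backbone.fc = nn.Linear(backbone.fc.in_features, 10)   # new head for a new task

# Only the new head's parameters are updated during fine-tuning.
trainable = [p for p in backbone.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")
```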

Challenges in Training Foundation Models

The GPU computing power and human resources required to train a foundation model like GPT from scratch dwarf those available to individual developers and small teams. The models are simply too large, and the dataset is too unwieldy. Such models cannot (as of now) be cost-effectively trained end-to-end and iterated using commodity hardware.

Although the concepts may be well explained by published research and understood by many data scientists, the engineering skills and eye-watering costs required to wire up hundreds of GPU nodes for months at a time would stretch the budgets of most organizations. And that’s ignoring the costs of dataset access, storage, and data transfer associated with feeding the model massive quantities of training samples.

There are several reasons why models like ChatGPT are currently out of reach for individuals to train:

  1. Data requirements: Training a large language model like ChatGPT requires a massive amount of text data. This data must be high-quality and diverse and is typically obtained from a variety of sources such as books, articles, and websites. This data is also preprocessed to get the best performance, which is an additional task that requires knowledge and expertise. Storage, data transfer, and data loading costs are substantially higher than what is used for more narrowly focused models.
  2. Computational resources: ChatGPT requires significant computational resources to train. This includes networked clusters of powerful GPUs and a large amount of memory, both volatile and non-volatile. Running such a computer cluster can easily cost hundreds of thousands of dollars per experiment.
  3. Training time: Training a foundation model can take several weeks or even months, depending on the computational resources available. Wiring up and renting this many resources requires a lot of skill and a generous time commitment, not to mention associated cloud computing costs.
  4. Expertise: Getting a training run to complete successfully requires knowledge of machine learning, natural language processing, data engineering, cloud infrastructure, networking, and more. Such a large cross-disciplinary set of skills is not something that can be easily picked up by most individuals.

Accessing Pre-Trained AI Models

That said, there are pre-trained models available, and some can be fine-tuned with a smaller amount of data and resources for a more specific and narrower set of tasks, which is a more accessible option for individuals and smaller organizations.
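As an example, loading a published checkpoint takes only a few lines with the Hugging Face transformers library; the model name below is illustrative, and fine-tuning it on your own labeled data is a separate (and far cheaper) step than training from scratch.

```python
# Sketch: loading a pre-trained language model checkpoint with the Hugging Face
# `transformers` library. The model name is illustrative; pick one that fits
# your task and hardware budget.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2   # e.g., a binary sentiment task
)

inputs = tokenizer("Foundation models are surprisingly reusable.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)   # (1, 2) -- an untrained head, ready for fine-tuning
```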

Stable Diffusion took $600k to train – the equivalent of 150K GPU hours. That is a cluster of 256 GPUs running 24/7 for nearly a month. Stable Diffusion is considered a cost reduction compared to GPT. So, while it is indeed possible to train your own foundation model using commercial cloud providers like AWS, GCP, or Azure, the time, effort, required expertise, and overall cost of each iteration impose limitations on their use. There are many workarounds and techniques to re-purpose and partially re-train these models, but for now, if you want to train your own foundation model from scratch, your best bet is to apply to one of the few companies that have access to the resources necessary to support such an endeavor.

Contact Us for AI Services

If you are ready to leverage artificial intelligence and machine learning solutions, reach out to us today! Kopius is a leader in nearshore digital technology consulting and services.




Data Trends: Six Ways Data Will Change Business in 2023 and Beyond


By Kristina Scott

Data is big and getting bigger. We’ve tracked six major data-driven trends for the coming year.


Data is one of the fastest-growing and most innovative opportunities today to shape the way we work and lead. IDC predicts that by 2024, the inability to perform data- and AI-driven strategy will negatively affect 75% of the world’s largest public companies. And by 2025, 50% of those companies will promote data-informed decision-making by embedding analytics in their enterprise software (up from 33% in 2022), boosting demand for more data solutions and data-savvy employees.

Here is how data trends will shift in 2023 and beyond:

1. Data Democratization Drives Data Culture

If you think data is only relevant to analysts with advanced knowledge of data science, we’ve got news for you.  Data democratization is one of the most important trends in data. Gartner research forecasts that 80% of data-driven initiatives that are focused on business outcomes will become essential business functions by 2025.

Organizations are creating a data culture by attracting data-savvy talent and promoting data use and education for employees at all levels. To support data democratization, data must be exact, easily digestible, and accessible.

Research by McKinsey found that high-performing companies have a data leader in the C-suite and make data and self-service tools universally accessible to frontline employees.

2. Hyper-Automation and Real-Time Data Lower Costs

Real-time data and its automation will be the most valuable big data tools for businesses in the coming years. Gartner forecasts that by 2024, rapid hyper-automation will allow organizations to lower operational costs by 30%. And by 2025, the market for hyper-automation software will hit nearly $860 billion.

3. Artificial Intelligence and Machine Learning (AI & ML) Continue to Revolutionize Operations

The ability to implement AI and ML in operations will be a significant differentiator. Verta Insights found that industry leaders that outperform their peers financially are more than 2x as likely to ship AI projects, products, or features, and have made AI/ML investments at a higher level than their peers.

AI and ML technologies will boost the Natural Language Processing (NLP) market. NLP enables machines to understand and communicate with us in spoken and written human languages. The NLP market size will grow from $15.7 billion in 2022 to $49.4 billion by 2027, according to research from MarketsandMarkets.

We have seen the wave of interest in OpenAI’s ChatGPT, a conversational language-generation tool. This highly scalable technology could revolutionize a range of use cases – from summarizing changes to legal documents to completely changing how we research information through dialogue-like interactions, says CNBC.

This can have implications in many industries. For example, the healthcare sector already employs AI for diagnosis and treatment recommendations, patient engagement, and administrative tasks. 

4. Data Architecture Leads to Modernization

Data architecture accelerates digital transformation because it solves complex data problems through the automation of baseline data processes, increases data quality, and minimizes silos and manual errors. Companies modernize by leaning on data architecture to connect data across platforms and users. Companies will adopt new software, streamline operations, find better ways to use data, and discover new technological needs.

According to MuleSoft, organizations are ready to automate decision-making, dynamically improve data usage, and cut data management efforts by up to 70% by embedding real-time analytics in their data architecture.

5. Multi-Cloud Solutions Optimize Data Storage

Cloud use is accelerating. Companies will increasingly opt for a hybrid cloud, which combines the best aspects of private and public clouds.

Companies can access data collected by third-party cloud services, which reduces the need to build custom data collection and storage systems, which are often complex and expensive.

According to the Flexera State of the Cloud Report, 89% of respondents have a multi-cloud strategy, and 80% are taking a hybrid approach.

6. Enhanced Data Governance and Regulation Protect Users

Effective data governance will become the foundation for impactful and valuable data. 

As more countries introduce laws to regulate the use of various types of data, data governance comes to the forefront of data practices. European GDPR, Canadian PIPEDA, and Chinese PIPL won’t be the last laws that are introduced to protect citizen data.

Gartner has predicted that by 2023, 65% of the world’s population will have their personal data covered by regulations like GDPR. In turn, users will be more likely to trust companies with their data if they know it is more regulated.

Valence works with clients to implement a governance framework, find sources of data and data risk, and activate the organization around this innovative approach to data and process governance, including education, training, and process development. Learn more.

What these data trends add up to

As we step into 2023, organizations that understand current data trends can harness data to become more innovative, strategic, and adaptable. Our team helps clients with data assessments, by designing and structuring data assets, and by building modern data management solutions. We strategically integrate data into client businesses, use machine learning and artificial intelligence to create proactive insights, and create data visualizations and dashboards to make data meaningful.  

We help clients to develop a solution and create a modern data architecture that supports differentiated, cloud-enabled scalability, self-service capability, and faster time-to-market for new data products and solutions. Learn more.



Retail Technology and Innovation – a Conversation with Michael Guzzetta


We recently spent some time with Michael Guzzetta, a seasoned retail technology and innovation executive and consultant who has worked with brands such as The Walt Disney Company, Microsoft, See’s Candies, and H-E-B.

Tell me about your background. What brought you to retail?

Like many people, I launched my retail career in high school when I worked in the men’s department at Robinsons-May. I also worked for The Warehouse (music retailer) and was a CSR at Blockbuster Video – strangely, I still miss the satisfaction of organizing tapes on shelves.

I ignited my tech career in 2001 when I started working in payment processing and cloud-based tech, and then I returned to retail in 2009 when I joined Disney Store North America, one of the world’s strongest retail brands.

During my tenure at Disney, I had the privilege of working at the intersection of creative, marketing, and mobile/digital innovation. And this is where the innovation bug bit me and kicked off my decades-long work on omnichannel innovation projects. I seek opportunities to test and deploy in-store technology to simplify experiences for customers and employees, increase sales, and drive demand. Since jump-starting this journey at Disney Store, I’ve also helped See’s Candies, Microsoft, and H-E-B to advance their digital transformation through retail innovation.

What are some of the retail technologies that got you started?

I’ve seen it all! I’ve re-platformed eCommerce sites, deployed beacons and push notifications, deployed in-store traffic counting, worked on warehouse efficiency, automated and integrated buyer journeys and omnichannel programs, and more. I recently built a 20k SF innovation lab space to run proofs-of-concept to validate tech, test, and deployment in live environments. Smart checkout, supply chain, inventory management, eCommerce… you name it.

What are the biggest innovation challenges in retail today?

Some questions that keep certain retailers up at night are, “How can we simplify the shopping experience for customers and make it easier for them to check out?”, “How can we optimize our supply chain and inventory operations?”, “How can we improve accuracy for customers shopping online and reduce substitutions and shorts in fulfillment?” and “How can we make it easier and more efficient for personal shoppers to shop curbside and home delivery orders?” Not to mention, “What is the future of retail, and which technologies can help us stay competitive?”

I see potential in several trends to address those challenges, but my top three are:

Artificial Intelligence/Machine Learning – AI will continue to revolutionize retail. It’s permeated most of the technology we use today, whether it’s SaaS or hardware, like smart self-checkout. You can use AI, computer vision, and machine learning to identify products and immediately put them in your basket. AI is embedded in our everyday lives – it powers the smart assistants we use daily, monitors our social media activity, helps us book our travel, and runs self-driving cars, among dozens of other applications. And as a subset of AI, machine learning allows models to continue learning and improving, further advancing AI capabilities. I could go on, but suffice it to say that the retailer that nails AI first wins.

Computer vision. Computer vision has a sizable opportunity to solve inventory issues, especially for grocery brands. Today, there’s a gap between online inventory and what’s on the shelf, since the inventory system can’t keep pace with what’s stocked and on the shelves for personal shoppers, which is frustrating for customers who don’t expect substitutions or out-of-stock deliveries. With the advent of computer vision cameras, you can reconcile those differences and see what is on the shelf in real time to accurately inform what is available online. Computer vision-supported inventory management will be vital to creating a truly omnichannel experience. Computer vision also enables smart shopping carts, self-checkout kiosks, loss prevention, and theft prevention. Not to mention Amazon’s use of CV cameras with their Just Walk Out tech in Amazon Go, Amazon Fresh, and specific Whole Foods locations. It has endless applications for retail and gives you the eyes online that you can’t get in stores today.

Robotics. In the last five years, robotics has taken a seismic leap, and a shift has happened, which you can see in massive, automated fulfillment centers like those operated by Amazon, Kroger, and Walmart. A brand can deliver groceries in a region without having a physical store, thanks to robotic fulfillment centers and distribution centers. It’s a game-changer. Robotics has many functions beyond fulfillment in retail, but this application truly stands out.

What is a missed opportunity that more retail brands should take advantage of?

Data. Data is huge, and its importance can’t be overstated. It’s a big, missed opportunity for retailers today. Improving data management, governance, and sanitation is a massive opportunity for retailers that want to innovate.

Key opportunity areas around data in retail include customer experience (know your customer), understanding trends related to customer buying habits, and innovation. You can’t innovate at any speed with dirty data.

There’s a massive digital transformation revolution underway among retailers, and they are trying to innovate with data, but they have so much data that it can be overwhelming. They are trying to create data lakes and a single source of truth, but sometimes these efforts fail because of disparate data networks. I believe that some of the more prominent retailers will have their data act together in a few years.

“Dirty data” results from companies being around for a long time, so they’ve accrued multiple data sets and cloud providers, and their data hasn’t been merged and cleaned. If you don’t have the right data, you are making decisions based on bad or old data, which could hurt you strategically or literally.

What do you wish more people understood about retail technology and innovation?

Technology will not replace people. In my experience, technology is meant to enhance the human experience, which includes employees. If technology simplifies the process so much that the employees become idle, they are typically trained to manage the technology or cross-trained to grow their careers. Technology isn’t replacing the human experience any time soon, although it is undoubtedly changing the existing work experience – ideally for the better, both for the employees and the bottom line.

Technology doesn’t always lower costs for retailers. Hardware innovation requires significant capital expenses when it’s deployed chain-wide. Amazon’s “Just Walk Out” is impressive technology, but the infrastructure, cloud computing costs, and computer vision cameras are insanely expensive. In 5 years, that may be different, but today it is a loss leader. It’s worth it for Amazon because they can get positive press, demonstrate innovation, and show industry leadership. But Amazon has not lowered its operating costs with “Just Walk Out.” This is just one example, but there are many out there.

Online shopping will not eliminate brick-and-mortar shopping. If the pandemic has taught us anything, it’s that online shopping is here to stay – and convenience is extremely attractive to consumers. But I think people will never stop going to stores because people love shopping. The experience you get by tangibly picking something up and engaging with employees in a store location will always be around, even with the advent of the Metaverse.

What are some brands that excite you right now because of how they use technology?

Amazon. What they have been doing with Just Walk Out technology, dash carts, smart shelves, and other IoT technology puts Amazon at the front of the innovation pack. Let’s not forget that they’ve led the way in same or next-day delivery by innovating with their automated fulfillment centers! They have the desire, the resources, and the talent to be the frontrunner for years to come.

Alibaba. This Chinese company is another retailer that uses technology in incredible ways. Their HEMA retail grocery stores are packed with innovation and technology. They have IoT sensors across the stores, electronic shelf labels, facial recognition cameras so you can check out with your face, and robotic kitchens where your order is made and delivered on conveyor belts. They also have conveyors throughout the store, so a personal shopper can shop by zone, then hook bags to be carried to the wareroom for sortation and delivery prep – it’s impressive.

Walmart and Kroger. Both brands’ use of automated fulfillment centers (AFCs) and drone technology (among many others) are pushing the boundaries of grocery retail today. Their AFCs cast a much wider net and have expanded their existing markets, so, for example, we may see Kroger trucks in neighborhoods that don’t have a store in sight.

Home Depot. They have a smart app with 3D augmented reality and robust in-store mapping/wayfinding. Their use of machine learning is also impressive. For example, it helps them better understand what type of projects a customer might be working on based on their browsing and shopping habits.

Sephora. They use beacon technology to bring people with the Sephora app into the store and engage them. They have smart mirrors that help customers pick the right makeup for their skin tone and provide tutorials. Customers can shop directly through smart mirrors or work with an in-store makeup artist.

What advice do you have for retailers that want to invest in technology innovation?

My first piece of advice is to include change management in the project planning from the start.

There are inherent challenges in retail innovation, often due to change management issues. When a company has been around for decades or even more than a century, they operate with well-known, trusted, and often outdated infrastructure. While that infrastructure can’t uphold the company for the next several decades or centuries, there can be a fear of significant change and a deeply rooted preference for existing systems. There can be a fear of job loss because of the misconception that technology will replace people in retail.

Bring those change-resistant people into the innovation process early and often and invite them to be part of the idea generation. Any technology solution needs to be designed with the user’s needs in mind, and this audience is a core user group. Think “lean startup” approach.

My second piece of advice is to devote enough resources to innovation and give the innovation team the power to make decisions. The innovation team should still operate with lean resources, focusing on minimum viable products and proofs of concept, so failures aren’t cost-prohibitive. The innovation team performs best when it has the autonomy to test, learn, and fail as they explore innovative solutions. Then, it reports its findings and recommendations to higher-ups to calibrate and pivot where needed.

In closing, I’d say the key to innovation success is embracing the notion of failure. Failure has value! Put another way; failure is the fast track to learning. Learning what not to do and what to try next can help a retail company to accelerate faster than the competition. Think MVP, stay lean, get validated feedback quickly, and iterate until you have a breakthrough. And always maintain a growth mindset – never stop learning and growing.



3 Reasons Companies Advance Their Data Journey to Combat Economic Pressure


By Danny Vally

Have you updated your organization’s data journey lately? We are living in the Zettabyte Era: the volume, velocity, and variety of data assets being managed by companies are big and getting bigger.

Data is getting more complicated and siloed. Today’s data is more complex than the data a typical business managed just twenty years ago. Even small companies deal with large data sets from disparate sources that can be complicated to process. Each data set may have its own unique structure, size, query language, and type.

The types of data are also changing quickly. What used to be managed in spreadsheets now demands automated systems, machine data, social network data, IoT data, customer data, and more.

There are real economic advantages for companies that seize the data opportunity by investing in digital transformation (often starting by moving data to the cloud). Companies that take control of their data outperform the competition:

  • 40% more revenue per employee
  • 50% higher average net income on revenue
  • $100M in additional operating income annually

Common data journey scenarios that motivate data-driven investments include:

  • Understand and predict customer behavior in real-time
  • Cut costs and free up resources with simplified data analysis
  • Explore new business models by finding new relationships in data
  • Eliminate surprise and unnecessary expenses
  • Gather and unify data to better understand your business

A data strategy is more than a single tool, dashboard, or report. A mature data strategy for any business includes a roadmap to plan the company’s data architecture, migration, integration, and management. Building in governance planning to ensure data security, integrity, access, quality, and protection will empower a business to scale.

That roadmap may also include incorporating artificial intelligence and machine learning, which unleash predictive analytics, deep learning, and neural networks. While these were once understood to be tools available only to the world’s largest businesses, AI and ML are now being deployed at even small and midsized businesses, with much success.

We work with organizations throughout their data journey by helping to establish where they are, where they want to go, and what they want to achieve.

A data journey usually starts by understanding data sources and organizing the data. Many organizations have multiple data sources, so creating a common data store is an important starting point. Once the data is organized, we can harness insights from the data using reporting and visualization, which enables a real-time understanding of key metrics. Ensuring data governance and trust in sharing data is another important step, which is often supported by security. Lastly, advanced analytics can apply artificial intelligence and machine learning to look for data trends, predict behaviors, and extract new insights. By understanding where your organization is in its data journey, you can begin to visualize its next step.



Digital Twins, Machine Learning, and IoT


Digital twins are part of the Internet of Things (IoT) interconnected system. In 2021, Accenture positioned them as one of the top five strategic technology trends to watch.


As the name suggests, a digital twin is a virtual model designed to reflect a physical object. Companies like Chevron are using digital twins to predict maintenance issues faster, and Unilever used one on the Azure IoT platform to analyze and fine-tune factory operations such as temperatures and production cycle times.

With a digital twin, the object being studied is outfitted with sensors related to key areas of functionality to produce data about aspects of the physical object’s performance, such as energy output, temperature, and weather conditions. The data is relayed to a processing system and applied to the twin. 

Once informed with this data, the digital twin can run simulations, study performance issues, and generate possible improvements, all while generating insights that can be applied to the physical object.
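In its simplest form, that loop (ingest sensor readings, update the virtual model, run a check or simulation) can be sketched in a few lines of code. The sensor names and thresholds below are made up for illustration; real digital twins run on managed platforms and far richer models.

```python
# Minimal digital-twin sketch: a software model kept in sync with sensor
# readings from a physical asset. Sensor names and thresholds are made up.
from dataclasses import dataclass, field

@dataclass
class PumpTwin:
    state: dict = field(default_factory=lambda: {"temperature_c": 0.0, "vibration_mm_s": 0.0})

    def apply_reading(self, sensor: str, value: float) -> None:
        """Update the twin with the latest reading relayed from the device."""
        self.state[sensor] = value

    def predict_maintenance(self) -> bool:
        """A stand-in 'simulation': flag the asset if readings drift out of range."""
        return self.state["temperature_c"] > 80 or self.state["vibration_mm_s"] > 7.1

twin = PumpTwin()
for sensor, value in [("temperature_c", 76.0), ("vibration_mm_s", 8.3)]:
    twin.apply_reading(sensor, value)

print("Schedule maintenance:", twin.predict_maintenance())
```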

Sometimes digital twins include a rich immersive visual experience, but that’s not always the case. Sometimes they have a simple interface or no interface at all.

Digital Twins are part of the evolution of IoT within the digital transformation. They are used often today in commercial real estate and facilities planning, and as we think about the metaverse, digital twins take on increasing importance with virtual spaces. When you think about the implications of machine learning on digital twins and the IoT, the possibilities for real-time smart monitoring get very interesting.

Imagine a large corporate campus that has been turned into an enormous digital twin that expands to other campuses and physical locations. What if that digital twin uses machine learning to optimize things like traffic, utilities, and weather? How could a global company use digital twins to have a complete model of the physical world?

Here is our biggest tip for anyone considering digital twins as part of a project strategy:

We like to start by considering the existing tools. A robust set of tools already exists through companies like Microsoft (Azure Digital Twins) and Amazon Web Services (AWS IoT TwinMaker), both of which are Valence partners.

Leverage existing industry ontologies (data dictionaries), such as shared schemas, naming systems, and data formats for interchange within communities. You’ll benefit from established best practices and from broader interoperability between third-party vendors.

Microsoft contributed an industry standard, the Digital Twins Definition Language (DTDL), that makes it simpler to build, use, and maintain digital twins.

The underlying services are provisioned automatically so developers can build upon a platform of services and extend the existing Microsoft or Amazon product. The process isn’t turnkey, and you won’t be able to create a digital twin using completely out-of-the-box tools, but the platform is managed for you, which lowers operating costs. The platforms are also more secure and designed with operational best practices in mind, such as automatic backups and built-in deployment automation.

Building upon industry standards will also save you time. For example, if you want to create a smart building solution and need to describe a building’s physical space, industry standards will help since software developers don’t usually have a facilities or building management background. An industry-standard model gives developers an advantage when creating a digital twin that their clients can understand and use.  

Data-driven solution

Digital twins create a platform to measure and store data. With the data available, you can test and answer both operational and business questions. For example, you can investigate fragile, risky components in your supply/production system and explore opportunities to improve and expand new services. The key is that measuring and storing the data are essential steps before using any analytical tool.

Digital Twins are Evolving

While building a digital twin is more involved than what a typical business user can do on their own, we can develop these complex systems with a modest team of developers and designers. We typically only need to bring in highly specialized engineers when there are heavy integration and interoperability challenges with several vendors.

The technology is evolving, and early-stage challenges with vendor integration will improve over time, making it easier to transition a digital twin solution from one cloud provider to another.

One of the keys to digital transformation is challenging how we do things today to explore how to get more computerization and automation involved. Can digital twins improve your organization’s warehousing and distribution? Can digital twins improve the challenges faced in the supply chain? Can your sustainability goals be tested with a digital twin? There are many possibilities to consider!
