
Webinar Recap: Create the Ideal Customer Experience, Without Giving Up Control


I had the privilege of hosting a webinar with Peter Curran, President and Co-Founder of Cirrus10, and expert consultant on designing web ecommerce architectures.

You can listen to the entire webinar below, and I’ll mention some of the highlights of what Peter and I discussed.

Companies that have digital commerce businesses need to meet the demands of customers who are asking for more personalized experiences. Forrester reported that 36% of US online adults believe that retailers should offer them more personalized experiences, and Gartner has confirmed that sellers who personalize the customer experience see greater levels of customer engagement and higher retention. A Fast Company piece from June 2018 explains the challenge facing retailers as they think about Amazon:

“The problem is, most can’t possibly win against Amazon by playing the e-commerce giant’s game. To survive (and thrive) in a marketplace where price and convenience reign supreme, retailers of all stripes need to provide something that Amazon can’t: high-quality, human-touch customer service.”

Watch the webinar recording to hear Peter’s real-life examples of shopping for tents, utility pants, and lingerie. He highlights how poorly supported ecommerce browse and search experiences harm click-through, conversion, and revenue. And he compares those examples with sites that improve shopper loyalty by giving customers more relevant, human-touch shopping experiences.

Click here to watch the webinar.

Ecommerce companies should be able to use both business rules and machine learning. Our digital commerce solution hyper-personalizes search and browsing, while letting the merchandiser curate the end-to-end experience, for the “ideal customer experience, without giving up control.”

In the webinar, Peter discusses the challenges that merchandisers face maintaining huge sets of business rules in an ever-changing marketplace. Customer preferences change organically, new products come to market, and the rise of third-party marketplaces can double or triple the size of an e-retailer’s product catalog overnight.

None of these challenges mean that the business rules are going away. Rather, business rules and Artificial Intelligence must interact with each other if teams are going to manage the growing time commitment to maintain merchandising rules. Because Fusion AI can minimize the time they spend maintaining those rules, it frees merchandisers to dedicate more time to the activities where deep human insight and expertise is required.

A handful of companies have spent billions developing AI on their own platforms. (Peter introduced us to an acronym that names those companies: “G-MAFIA,” which stands for Google, Microsoft, Amazon, Facebook, IBM, and Apple.) We mean no disrespect to the G-MAFIA, but thousands of ecommerce companies cannot invest their way into that exclusive AI club.

Lucidworks gives merchandisers a platform to:

  • Curate the shopper experience using their ecommerce expertise,
  • Personalize results for the shopper, and
  • Optimize the entire end-to-end process with applied machine learning.

Watch the webinar replay for hypothetical scenarios showing how Lucidworks Fusion combines human and machine intelligence, delivers relevant search results before the shopper knows to ask, and scales the number of ML models for product recommendations.

If you want to learn more about how to increase conversions on your site, boost order values, and improve customer loyalty, contact us and schedule a meeting with one of our expert consultants.

The post Webinar Recap: Create the Ideal Customer Experience, Without Giving Up Control appeared first on Lucidworks.


Can Machine Learning Help Give Boost to Flagging Healthcare?


According to the World Health Organization, more people than ever are living into their 60s and beyond; this older population will comprise 20 percent of the world’s population by 2050.

Our longevity can be attributed to several factors, but the outsized role of advances in health care, although unquantifiable as a single factor, is undeniable.

If only healthcare could be provided efficiently to our over-surviving populace. It is no secret, however, that the industry is facing enormous challenges.

Shortage of Professionals

According to the Association of American Medical Colleges, by 2025, there will be a shortfall of between 14,900 and 35,600 primary care physicians; the shortage of non-primary care specialists is projected to reach anywhere from 37,400 to 60,300 by that same year. Updated 2018 data from the AAMC indicate that by 2030, the overall shortage will reach a whopping 120,000 (a shortfall of 14,800 to 49,300 primary care providers and 33,800 to 72,700 non-primary care providers).

Along with this shortage, myriad issues continue to derail the provision of quality healthcare including:

  • Provider burnout, which is strongly associated with medical errors
  • Disparate electronic health records that impede the sharing of vital information such as patient history and medication profiles
  • Drug and medical supply shortages
  • Resurgent diseases
  • Antibiotic resistance
  • Administrative inefficiency and issues with insurance reimbursement and claim processing
  • Rising costs of care and medications

Despite the incredible connectivity afforded by technology in the 20th and 21st centuries and near-daily medical advances, few will argue: The U.S. healthcare system is broken, consistently ranking as the most expensive and the least efficient in countless reports.

Providers and their staffs are inundated with paperwork, growing demands to see more patients in less time, rising costs of malpractice insurance – all while striving to stay abreast of breaking studies and new treatment protocols.

Could providers use … an extra set of processors? Could the entire industry?

Artificial Intelligence: Medicine, Reimagined

The 21st century has brought about the growing use of technologies that use artificial intelligence (AI) to complete tasks and reasoning once performed by humans, including providers. Britannica defines AI as “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.”

Granted, there is a strangeness, even creepiness, to devices that are programmed to reason, discover meaning, generalize, and learn from experience, qualities that may seem amazingly – or alarmingly – human. But in one sense, AI is nothing new. The founding father of AI and modern cognitive science was British mathematician and logician Alan Turing. A codebreaker during World War II, Turing focused his post-war research on the parallels between computers and the human brain. He considered the cortex to be an “unorganized machine” that could be trained to be organized into “a universal machine or something like it.”

The term artificial intelligence, however, was not coined until a few years later, in 1955 by Turing contemporary John McCarthy, a mathematics professor at Dartmouth College. McCarthy defined AI as the “science and engineering of making intelligent machines.” Shortly thereafter, McCarthy convened the first AI conference and subsequently created one of the first computer languages, LISP.

It took a while (35 to 40 years) for computers to be a staple of providers’ practices, and a ubiquitous item in patients’ purses and back pockets. But once computers became commonplace in our society, AI-powered computers and robots were the next logical step, right?

But not so fast.

These new-fangled machines bring unwelcome anxieties and fears. Will human beings be displaced and replaced by these unnerving inventions? Will AI be running the world? What will people do all day when machines are doing everybody’s job? Are providers on the brink of extinction? These questions are justified, and the feelings are natural. But we’ve been here before.

Truth Stranger Than Fiction?

Culturally, AI has long been the subject of dystopian sci-fi movies and a recurring source of dread, foreboding, and even hysteria. Nearly 100 years ago, in the 1927 classic “Metropolis,” the robot Maria mixed it up in a futuristic, mechanized society dominated by power, manipulation, and social engineering. In the decade that followed, an America mired in the throes of the Great Depression, massive automation, and culture-altering mechanization succumbed to “robot hysteria.”

And the trend continued. In 1968 came the epic showdown between rogue computer HAL 9000 and astronauts in “2001: A Space Odyssey,” followed by humans battling killer machines in the 1980s “Terminator” films and the manipulative, murderous robot Ava in 2014’s “Ex Machina.” For nearly a hundred years, our society has had a love-hate relationship with machines that think and act as humans do.

These fears bear examination and warrant validation; for providers, the influx of new AI-driven machines may seem to supplant physicians’ expertise, experience and mere existence. Will AI take over the roles of diagnosing patients and creating treatment plans? Who will run our practices, clinics, hospitals? Robots? Computers?

What exactly are these machines doing?

AI: Processes That Are Human-Like, But Not Human

Although the idea of smart computers that “learn” without our permission may seem alien, most of us already use AI in our everyday lives. Assistants such as Apple’s Siri, Amazon’s Alexa and other “personal assistants” may seem innocuous enough, but the truth is these and similar devices use machine-learning to better predict and understand our natural-language questions, interpret our speech patterns, and anticipate our future needs and habits.

Netflix, for example, analyzes, in a flash, billions of records to suggest films a consumer might like (and order or purchase) based on that person’s previous choices and ratings of films. Google very quickly anticipates websites of interest targeted to each user, and Amazon has long been using – and constantly refining – “transactional” AI algorithms that track individuals’ buying habits to assess, predict, and suggest purchasing behavior.

How do they do it? Welcome to machine learning (ML), one of the most common types of AI used in the medical field. ML is the application of statistical models to data using computers. Machines that use ML do not “think” independently; rather, they are programmed with algorithms: sets of criteria that establish a process for recognizing patterns and solving problems.
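To make that idea concrete, here is a toy sketch in plain Python (with made-up numbers, purely for illustration): the program “learns” a single decision threshold from labeled examples and then applies that learned criterion to new values. Real ML models are far richer, but this is the pattern-recognition loop in miniature.

```python
# Toy machine-learning sketch: fit a decision threshold to labeled data.
# The "model" is one number learned from examples, not a hand-coded rule.

def fit_threshold(samples):
    """Pick the threshold that best separates label-0 from label-1 values."""
    best_t, best_correct = None, -1
    for t in sorted(v for v, _ in samples):
        # Count how many examples this candidate threshold classifies correctly
        correct = sum((v >= t) == bool(label) for v, label in samples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

def predict(threshold, value):
    return 1 if value >= threshold else 0

# Hypothetical labeled history: (lab value, condition present?)
history = [(1.1, 0), (1.8, 0), (2.2, 0), (6.5, 1), (7.0, 1), (8.3, 1)]
t = fit_threshold(history)
print(t)                # learned threshold: 6.5
print(predict(t, 7.7))  # 1
print(predict(t, 1.5))  # 0
```

Feed it different data and the learned threshold changes; no one rewrites the rule by hand, which is exactly the point.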

The jury is out on whether consumer-level uses of AI to suggest where we should eat or whom we should connect with on LinkedIn are useful. But machine learning is also used in ways that have positive global effects.

Energy giant BP uses the technology at sites worldwide to optimize gas and oil production so that it is safe and reliable, something we all want when we start our cars every day. Its data is also analyzed by engineers and scientists to plan for future energy needs and to develop new models and energy sources. In a world with limited resources and a burgeoning global population, AI undoubtedly helps to keep prices low and oil and gas available.

GE Power similarly uses machine learning combined with big data and the internet of things to optimize operations, maintain equipment, and work toward digital power plants. We all like it when we flip a switch and the lights go on, and we take for granted that the hot tap will keep hot water flowing into our homes.

Many financial giants use AI to predict market trends, allowing investors and retirement funds to allocate money wisely, and to cut down on fraud and other financial crimes, which can lower the price of these services for the individuals and employers who rely on them to build retirement funds.

What Can You Do for Me, Mr. Roboto?

The innovations in medicine and related industries in the 21st century have brought about an astounding array of new medications, advances in imaging, and reimagined treatment protocols, all at break-neck speed.

How is a provider to keep up with all these changes, the endless journal studies and clinical trial data, new treatment options counter-balanced by black-box warnings, abrupt changes in long-held treatment protocols, reporting requirements, and harbingers of disease-outbreaks?

Not to mention the ever-increasing pressures that healthcare entails.

  • Data, data, data. AI-driven machines and their algorithms quickly incorporate myriad variables across a broad array of complex screening and diagnostic information. The information is processed to predict future events (e.g., which treatment protocol for disease X in population Y has led to the best outcomes based on previous cases). AI systems constantly update as new data are added, learning new associations and refining their predictive power in real time. Machine learning can help to reinforce currently successful interventions and, perhaps more importantly, reveal new trends and raise new questions based on trends in the data. Ostensibly, once compatible global AI systems are in place, providers will have access to data gleaned from every patient treated for any disorder in every country, plus breaking medical discoveries and clinical-trial outcomes, all housed within AI-driven machines and their algorithms. The incidence of medical errors and ineffective care caused by incomplete patient electronic health data, missed treatment opportunities, and misdiagnoses will be greatly reduced, perhaps even eliminated.
  • Eliminating bias. Another quality of AI is the blind nature of its data processing. Clinicians gain valuable knowledge through experience with past patient cases. This knowledge, however, can introduce prejudice into the decision-making process and lead to misdiagnosis, treatment based on old data, and missed opportunities for more current approaches. Done correctly, the use of AI can eliminate these potential pitfalls, as clinical recommendations are based not on a priori assumptions but on constantly updated medical data drawn from myriad sources.
    (CAUTION: Biased data can introduce the very biases you are trying to avoid.)
  • Building Patient Profiles/Predicting Outcomes. Recommending an effective treatment plan is predicated on certain provider tasks such as taking the patient’s history, performing a physical exam, and incorporating information from established research (all of which are vital in screening and diagnosis). The goal to deliver a specific outcome requires planning and implementation of said plan (actions needed for treatment and monitoring). AI computers can quickly “crunch” complex data from the physical exam, lab results, and extant study data to match thousands and even millions of global data sets to predict future events (the best treatment plan for the desired outcome), in a matter of minutes or even seconds.
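The “crunching” described in that last bullet can be sketched with a toy nearest-neighbor example in Python. Everything here is hypothetical: the feature vectors, treatment names, and outcome scores are invented for illustration, and real clinical prediction systems are vastly more sophisticated and rigorously validated.

```python
# Toy sketch of outcome prediction: recommend the treatment that worked
# best for the most similar past cases (a nearest-neighbor idea).
# All data is hypothetical, for illustration only.

def distance(a, b):
    # Squared Euclidean distance between two patient feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def suggest_treatment(patient, past_cases, k=3):
    """past_cases: list of (features, treatment, outcome_score in 0..1)."""
    nearest = sorted(past_cases, key=lambda c: distance(patient, c[0]))[:k]
    # Average the outcome score per treatment among the nearest cases
    totals = {}
    for _, treatment, outcome in nearest:
        s, n = totals.get(treatment, (0.0, 0))
        totals[treatment] = (s + outcome, n + 1)
    return max(totals, key=lambda t: totals[t][0] / totals[t][1])

# (age, lab_marker) feature vectors, the treatment given, and how well it worked
cases = [
    ((65, 2.1), "A", 0.9),
    ((70, 2.3), "A", 0.8),
    ((68, 2.0), "B", 0.4),
    ((30, 0.9), "B", 0.95),
    ((25, 1.0), "B", 0.9),
]
print(suggest_treatment((67, 2.2), cases))  # "A"
```

For an older patient with a high lab marker, the nearest past cases favor treatment A; for a younger, low-marker patient, they favor B. Scale the case list to millions of records and this is, in spirit, the matching a clinical AI system performs in seconds.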

Growing Support for AI in Medicine

AI is already a part of standard practice in many medical networks; data show that intelligent machines are saving lives. China is using an AI program called Infervision to keep pace with reviewing the annual 1.4 billion lung CT scans ordered to detect early signs of cancer. The application has proven successful in addressing the country’s radiologist shortage and provider fatigue (and medical errors) associated with reading endless scans. Infervision is programmed to teach its algorithms to diagnose cancer more efficiently.

InferRead DR Chest: Reducing missed diagnosis in conventional chest X-ray

 

The University of North Carolina is using AI from IBM to comb through data such as discharge summaries, doctors’ notes, and registration forms to identify high-risk patients and formulate prevention care plans for them. Google’s DeepMind initiative aims to mimic the brain’s thought processes to reduce the time needed for diagnosis and treatment planning. DeepMind has been deployed at University College London to quickly assess brain images and recommend appropriate, life-saving radiotherapy treatments.

But not every AI-driven medical application has been a success.

IBM’s AI-driven machine Watson (whose original claim to fame was beating human contestants at “Jeopardy” on prime-time TV), was hyped as a state-of-the-art personalized cancer treatment genius to millions of patients and providers around the world. In 2012, MD Anderson Cancer Center at the University of Texas paid Big Blue millions to partner in bringing Watson into the fold. In 2017, however, the university withdrew from the partnership. The main problems, although not clearly articulated by MD Anderson, may have been issues with procurement, cost overruns and delays. In addition, expectations rose to unrealistic levels due to publicity and hype.

Authors of a blog published in “Health Affairs” who commented on Watson’s failure noted that AI was no replacement for human involvement in clinical care; machines and robots are not poised to take on all the reasoning and complex tasks inherent in health care. They did concede, however, that AI is suited for reducing time spent on resource-intensive tasks.  

Although Watson was not yet ready for prime time, the authors noted, “… carefully examining and evaluating opportunities to automate tasks and augment decisions with machine learning can quickly yield benefits in everyday care. Furthermore, by taking a practical approach to evaluating and adopting machine learning, health systems can improve patient care today, while preparing for future innovations.”

Don’t Let the AI Train Pass You By

The AI marketplace is exploding; according to a recent analysis by Accenture, AI will grow to a $6.6 billion industry by 2021. New health care AI initiatives are cropping up everywhere, and investors are taking notice. This bodes well for providers; dollars invested in AI health care will ensure that current applications are expanded, that extant programs (such as Watson) receive the tweaking and refinement they need, and that new AI opportunities are funded.

AI could hold the keys to improving our beleaguered healthcare system in ways that were unimaginable a couple of decades ago. Projections indicate that insurance companies could save up to $7 billion over 18 months by using AI-driven technologies to streamline administrative processes. Several insurers are already using AI chatbots to answer simple patient questions and predict emergency-room visits.

Providers could similarly use AI for administrative tasks, freeing up staff for more patient-oriented encounters, expediting insurance reimbursement, and streamlining referrals. That is in addition to the myriad care-related support functions that AI can already provide; many more are in the works and will be rolled out in the coming months, years, and decades.

Skepticism is natural, and thorough vetting is necessary to ensure that Silicon Valley and other Big Tech promises don’t override peer-reviewed research on the efficacy of AI applications. That vetting requires consistent testing and feedback from a wide array of providers in a variety of disciplines, and an appropriate rollout, to ensure the safety, accuracy, and applicability of AI in medicine.

The robots and machines are becoming more powerful. No worries — there will always be a doctor in the house, and, like the new technology, she’ll be better than ever.

 

Lise Millay Stevens, M.A., is a contract medical communications specialist who has served at the New York City Department of Health & Mental Hygiene (deputy director of publications), at the American Medical Association (managing editor of five JAMA journals), and then as senior press associate and editorial manager of Neurology. Lise is a member of the American Medical Writers Association (past president, Chicago), the New York Press Club, and the Society of Professional Journalists.

The post Can Machine Learning Help Give Boost to Flagging Healthcare? appeared first on Lucidworks.

From Committer to Chairperson: Meet Cassandra Targett, the First Woman Chair of the Lucene PMC


This year marks the 10th anniversary for Cassandra Targett here at Lucidworks. That decade saw many “firsts” for her: becoming the first woman committer on the Lucene/Solr project in 2013, six years with the Apache Software Foundation (ASF), and, most recently, being named the first woman Chair of the Lucene Project Management Committee (PMC). In addition to her new role as Chairwoman, she also serves as Lucidworks’ Director of Engineering.

Hi, Cassandra. Could you give us some background on what The Apache Software Foundation is?

CT: The ASF is made up of a Board of Directors that ensures each project has legal, community, and infrastructure support, providing each project with guidance on third-party software that can be distributed, protecting our trademarks, and providing us with servers, mailing lists, issue tracking systems, etc.

Each project (Lucene in our case, which includes Solr as a sub-project) is self-governing. We have a great deal of latitude to develop and release software at our own pace and on our own schedule. We choose how we want to interact with users of our software, when to elect people to become committers, and generally create our own unique culture under the ASF banner.

Walk us through your personal history with ASF.

CT:  I first contributed to the Lucene/Solr project when Lucidworks donated the Solr Reference Guide to the project. We had committers writing for the community documentation and then also being asked to help with our version of it. In early 2013, we decided it would be better all the way around if we donated the Solr Reference Guide to the ASF and contributed our efforts directly there.

Since I had been the primary author of the Guide before we donated it, I led the effort to get it transferred to the project, and then was a primary driver of ensuring it was updated for each release of Solr. I was made a committer in August 2013 and was eventually invited to join the PMC in December 2015. As of 2019, the project has grown to about 75 committers (almost doubled from when I started), and the user base has expanded as well.

Tell us about your role as PMC. How do you balance your work?   

CT: As the Chair of the PMC, I’m responsible for ensuring everything goes smoothly, and I have the permissions to update project records to give a new committer or PMC member the permissions for the new things they’re allowed to do.

It can be challenging to split up my time, and my responsibilities have shifted over the past two quarters. For the past three years I was a committer managing the Solr Team. Whatever time I had outside of planning with them, I contributed my own stuff to the community on behalf of Lucidworks; I was 100% focused on the community. With my new role as PMC and some reorganization at Lucidworks, I’m figuring out how to re-balance my time.

What do you hope to accomplish in this new role as PMC?

CT: My skill set is really around content and organization. And that’s what the role of Chair is all about — being the bridge between the project and the requirements set by the broader ASF. When I assumed the role, I was forwarded an email thread started years ago by earlier Chairs that had been passed down to successive Chairs. Besides trying to be as responsive as possible to the community needs there, I’d like to leave that a bit more organized, so everything a Chair needs to know is in a single easy-to-access place.

A secondary goal is simply not to break anything that makes the community special. The most important thing is that everyone is independent and encouraged to be independent. Being the Chair doesn’t mean I now “own” anything about the community; my voice is equal to everyone else’s.

What future do you envision for the work you’re now doing at Lucidworks?

CT: My role at Lucidworks is Director of Engineering, and the team I’m managing is broadly responsible for Lucidworks’ data retrieval and analytics features. It’s really a wonderful team filled with incredibly intelligent and hard-working people at diverse stages of their careers, who have a wide range of personal goals.

In my view, I have two primary responsibilities – first, to use my best abilities to make Lucidworks successful; and second, to ensure my team is able to achieve the goals they have set for themselves. I want these to go hand-in-hand – where individual growth can be seen in the company’s results, and where our culture of encouraging people to bring their best selves to work means the company is as wildly successful as we want it to be.

Lucidworks currently has just under ten PMC members in addition to three committers. Check out the opportunities available to join our team today: https://boards.greenhouse.io/lucidworks.

The post From Committer to Chairperson: Meet Cassandra Targett, the First Woman Chair of the Lucene PMC appeared first on Lucidworks.

Running Solr w/ Etcd


Not a recommendation. Still experimenting. Not rigorously tested. Help needed. Intimidated.

Last week, while I was re-reading our blog post, Apache Solr with Kubernetes, I started thinking about the possibility of repurposing Kubernetes to handle service discovery for Solr instead of using ZooKeeper. After all, Kubernetes (via etcd) and ZooKeeper seem to do a lot of the same things. Could etcd perhaps make Solr run leaner and faster?

The idea of abandoning ZooKeeper is a little disconcerting, though. Anyone who comes from the Solr world will tell you, if you are trying to run a Solr cluster without ZooKeeper, “Good luck!” ZooKeeper and Solr are tightly coupled. I heard this sentiment from some of the top Solr engineers on earth, and the people who said it are usually right.

But I started thinking about using etcd, the service discovery component of Kubernetes. What if there were a ZooKeeper-like API for Solr to talk to etcd without a whole lot of rework? Lucky for all of us, someone else already thought of that and built it: the kind folks at etcd-io (formerly CoreOS). They’ve created a library called zetcd that lets applications use the ZooKeeper APIs while being backed by an etcd cluster. Here’s how the etcd-io team describes zetcd:

The zetcd proxy sits in front of an etcd cluster and serves an emulated ZooKeeper client port, letting unmodified ZooKeeper applications run on top of etcd. At a high level, zetcd ingests ZooKeeper client requests, fits them to etcd’s data model and API, issues the requests to etcd, then returns translated responses back to the client. The proxy’s performance is competitive with ZooKeeper proper and simplifies ZooKeeper cluster management with etcd features and tooling.
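The translation described above can be illustrated with a toy sketch. This is Python for brevity (zetcd itself is written in Go) and a conceptual illustration only, not zetcd’s actual code: hierarchical, ZooKeeper-style znode paths are mapped onto a flat key space with prefix reads, which is roughly the data model etcd exposes.

```python
# Conceptual sketch of the zetcd idea: serve a ZooKeeper-like API
# (hierarchical znodes) on top of a flat key-value store like etcd's.
# Illustration only; not zetcd's real implementation.

class FlatKV:
    """Stand-in for etcd: a flat key space with prefix range reads."""
    def __init__(self):
        self.data = {}

    def put(self, key, value):
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)

    def range_prefix(self, prefix):
        return [k for k in self.data if k.startswith(prefix)]

class ZKFacade:
    """Translate znode operations onto the flat store."""
    def __init__(self, kv):
        self.kv = kv

    def create(self, path, value):
        self.kv.put(path, value)

    def get_data(self, path):
        return self.kv.get(path)

    def get_children(self, path):
        prefix = path.rstrip("/") + "/"
        return sorted(
            k[len(prefix):] for k in self.kv.range_prefix(prefix)
            if "/" not in k[len(prefix):]  # direct children only
        )

zk = ZKFacade(FlatKV())
zk.create("/solr/live_nodes/node1", b"")
zk.create("/solr/live_nodes/node2", b"")
print(zk.get_children("/solr/live_nodes"))  # ['node1', 'node2']
```

A Solr client asking for the children of `/solr/live_nodes` never sees that, underneath, the “tree” is just flat keys; zetcd performs this kind of fitting (plus watches, sessions, and ephemeral nodes, which are the genuinely hard parts) against a real etcd cluster.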

It was shockingly simple to run Solr with zetcd.

How to Run Solr With Zetcd

Here’s all you need to do to start zetcd (the ZooKeeper-like layer on top of etcd in Kubernetes):

Environment Setup

You’ll need to have Go installed and the environment variables set up. If you don’t, there’s an example below in Step 1. If you already have Go installed, you can jump to Step 2.

Step 1 (Example on Mac OS X)

$ brew install go
$ export GOPATH="$HOME/go"
$ export PATH="$PATH:$GOPATH/bin"

Step 2

$ go get github.com/etcd-io/zetcd/cmd/zetcd
$ zetcd --zkaddr 0.0.0.0:2181 --endpoints localhost:2379

Start Solr in SolrCloud Mode

Step 3

Then, start Solr in SolrCloud mode using the bundled cloud example:

$ bin/solr start -e cloud -z localhost:2181 -noprompt

Solr is now talking to zetcd, which is really etcd, but Solr thinks it’s talking to ZooKeeper. It’s that simple. Try it out and let me know how it goes. I am just starting to dig and have lots to learn.

Some features will not work, such as some of the commands you are accustomed to in the ZooKeeper CLI. Another, more obvious issue is that zetcd does not expose CPU and heap size. I am digging into the likely reasons for that and hope we can address them soon. Obviously, I don’t know what I don’t know yet.

I haven’t fully tested this integration, but I am trying and we expect to provide some feedback in the coming days. Any help on that front would be greatly appreciated. This evolution could be great for Solr because it will afford organizations more flexibility in terms of how they deploy the application or choose whether or not to use Solr in the first place.

Marcus Eagan is a software engineer based in Palo Alto, California and is head of Developer Advocacy for Lucidworks. Follow Marcus on Twitter @marcusforpeace.

Graphic from CoreOS.

The post Running Solr w/ Etcd appeared first on Lucidworks.

How to Form Your Digital Transformation Strategy


Polaroid is a good example of why even forward-thinking companies must achieve agility to meet market needs. We have some suggestions for how to go about the process.

In 1991, Polaroid was a $3 billion company. In 2001, it went bankrupt, eventually selling its brand and assets. The successor company again filed for bankruptcy in 2008, and was eventually sold to the Impossible Project in 2017. Needless to say, today very little of the instant camera giant remains.

Prynt, a company that makes a device that lets you print photos directly from your phone, raised $1.5 million on Kickstarter, even though 64 percent of Kickstarter projects fail to meet their funding goals.

How did Polaroid go under when the Prynt initiative clearly shows consumers want the ability to print photos in, well, an instant? And can Polaroid teach us lessons about how important transformation is in an increasingly digital world?

How an Innovator Lost Its Way

As the only real player in the instant photography industry, Polaroid was the best positioned to move into digital photography early. We have the advantage of hindsight, but Polaroid’s failure to do so is even more puzzling because Polaroid had a history of pivoting to serve changing markets, aka transformation.

Founded in the 1930s as a polarization company, it shifted course after World War II to instant photography — and disrupted Kodak’s firm grip on the photography market. The company went on to launch many new products to varying degrees of success. It even offered, in the mid-1990s, a “consumer digital camera, though it wasn’t much differentiated from its competition,” according to Christopher Bonanos in his book “Instant: The Story of Polaroid.”

Polaroid’s half-hearted effort at digital transformation took a back seat to its primary business of selling instant film.

Eventually, it was overtaken by firms offering digital images – as distinct from digital cameras. A few short years later, digital cameras became all but obsolete when battery life and storage advances turned every phone into a digital camera.

And today we’ve come full circle – the new Polaroid launched an instant print camera in 2015, just after Prynt launched its Kickstarter campaign.

Given the success of digital images, it seems likely that true digital transformation would have helped Polaroid. But just what is true digital transformation?

Digital Transformation Makes the Most of Data

Digital transformation is about finding ways to use data to drive efficiency, business insights, and revenue. And by data we mean both internal data (customer data, buying behaviors, market penetration) and external data (market data, trend data, consumer data). Consequently, this pool of data is unique to every organization.

This pool, so to speak, faces some challenges within organizations themselves. Those include difficulties managing, integrating, and analyzing enormous amounts of data; resistance to change within organizations; and entrenched technologies that make changing direction difficult. The lack of enthusiasm for cannibalizing a cash cow, as evidenced by Polaroid, is frustratingly common. (You can read more about these challenges and ways to overcome them in our previous blog, “Digital Transformation: What We Can Learn From the Experts.“)

Even with these challenges, Gartner estimates that digital transformation projects could account for 36 percent of total corporate revenue by 2020, according to ITPro.

So where and how do you start?

Digital Transformation Journey

You start by creating a new data-driven vision for the organization that helps define new business possibilities and opportunities. You implement changes iteratively, the better to see how changes impact the enterprise. The end goal is agility — the ability to respond to external and internal forces to remain a healthy organization.

Insights Derived From Data Direct Business Change

Digital transformation thrives on insights derived from data that, in turn, provide a new perspective on business activity and allow the creation of new business models. Insights are distilled from data in context. Insights back up a hypothesis. Data is a raw material.

Had Polaroid found data insights, it might have survived, and Business Insider wouldn't have called Prynt the "$150 case that turns your iPhone into a Polaroid."

Polaroid knew its highest profit margin was on instant film, but data insights may have given Polaroid’s leaders a new view on the profitability of its inkjet printer business – especially since “printer ink costs more than blood by volume and more than caviar by weight,” according to The Motley Fool.

Key question: Are you systematically looking for and creating new forms of insights?

To promote digital transformation, the insights dimension needs more data, more tools for analytics, faster cycle times for analysis, and more automation of data preparation and data engineering tasks. These are the raw materials of insights.

Awareness of Business Opportunities Can Drive Transformation

Awareness of business challenges and new opportunities comes from applying data insights to a larger section of the enterprise to see a richer view of the world.

Those insights can be the foundation for more complex and richer data models that, through analysis and visualization, allow us to see how events are connected and the choices that we can make.

If Polaroid had possessed this awareness of the market, it might have seen that the ability to create as many memories as possible would outweigh the need for a single snapshot of an event, or even 10 photos of one event.

In 2001, we were just three years away from the founding of Facebook, but social media was already widespread (MySpace anyone?). Polaroid’s leaders may even have foreseen that such spaces would provide permanent electronic records accessible from anywhere.

Key question: What types of new awareness are essential in your program of digital transformation?

To create awareness, organizations need better tools for modeling and exploring data, as well as interactive and automated analysis of it. The ability to find relevant data, create models, generate reports, and use interactive environments is crucial.

The more the actions taken to serve the customer are digital, the more automation can grow. All of these raw materials create more human awareness but also lay the groundwork for autonomous systems.

Optimization Creates Value Across the Business

Digital transformation programs are iterative. Insights lead to awareness, and awareness leads to new business models. Those, in turn, should be implemented in stages, ideally with a small project that proves the product/market fit of the new digital model.

Optimization expands this nugget into a larger system that creates much more value.

This process continues until the central business model that drives digital transformation expands and adapts to all relevant situations.

In addition, the details of each new application must be taken into account so that the product or service provides maximum value.

Polaroid's approach to bringing new products to market in the mid-1990s was "a throw-it-against-the-wall-and-see-what-sticks strategy," Bonanos wrote. He also noted that Polaroid could have abandoned the consumer digital business and focused on commercial applications such as medical imaging, identification systems, and scientific work. "There, Polaroid could dig into its knowledge base and beat everyone else to market with specialized digital products."

Digital transformations often lead to unexplored territory that requires that the product keep evolving. The challenge is both moving in the right direction and also moving fast. If Polaroid had optimized its already-existing superior imaging technology into digital images, it may have avoided bankruptcy.

Key question: What adjustments and optimizations will be needed as your digital business model expands?

Functionality that supports optimization includes the ability to do statistically mature A/B testing and other forms of experimentation including exploring new data sources and formats. Companies need a strong data foundation to analyze and track usage of a product, as well as strong product management to allow collaboration about ideas moving a product forward.

Automation Allows Digital Business Models to Scale

Regardless of industry, the digital age means representing your product, service or process — digitally. Sometimes known as a digital twin, the concept is born out of the Internet of Things and Manufacturing 4.0. While it is possible for a business to grow strictly based on optimized processes, automation enables scalability in the longer term.

In Polaroid's case, it had manufactured digital cameras for both commercial and consumer purposes. But it hadn't considered that what people really wanted was the digital image. If only Polaroid had allowed others to make a digital twin of themselves!

We will look more at the role of the digital twin in the future. But in the meantime, companies that are embarking on automation (ranging from healthcare, to media, to automotive) are finding that full automation is hard to achieve and usually involves refactoring business processes into a new architecture. What happens faster is human-guided automation that operates within guardrails so that people can direct the process.

Automation used to be about APIs and coding, and, of course, it still is. But increasingly, automation is powered by machine learning algorithms that can discover and recognize patterns that humans cannot.

Key question: What is your strategy for creating an automation architecture for your digital business model?

The functional landscape for automation must include as much coverage of the product by APIs as possible. This allows scripts, more advanced programs, and machine learning systems to control the behavior of the product or supporting system. As much instrumentation as possible should be in place, at all levels, to support the feedback loops that enable learning and problem solving.

Scalable Architecture Can Grow

Scalability of digital operations is an essential component of digital transformation. It allows the enterprise to build on the iterative successes they achieve and makes optimization possible. (Digital transformation does not always involve massive scale, though it often does.)

Scalable architecture can be built in stages, too. We often think of scalability as the ability to expand, and even contract, when needed. For example, Amazon Web Services offers auto scaling, which adjusts capacity when you need more performance.

But it can also mean that “the architecture can natively handle such growth, or that enlarging the architecture to handle growth is a trivial part of the original design,” according to Computer Hope.

It is not required to have the ultimate architecture for scalability at the beginning of digital transformation, but you must plan for growth.

Key question: Is your current architecture scalable for the current stage? Are there signs it is wearing out?

Scalability is not a one-size-fits-all solution; rather, each need may require different hardware, software, expertise, and data.

For some technology, it is possible to buy scalable components. For others, the scalability must come by identifying bottlenecks and putting in the right abstractions so that load balancing and scale out approaches can be introduced when needed.

Digital transformation requires that scalability requirements always be present in the minds of the implementers.

Agility Completes Digital Transformation

The hallmark of the most successful digital businesses is the preservation of agility even as the businesses grow. “Agility is the ability of an organization to renew itself, adapt, change quickly, and succeed in a rapidly changing, ambiguous, turbulent environment,” said Aaron De Smet of McKinsey.

For many organizations, agility is the goal of digital transformation. Organizations today are operating in an ever-changing technology environment. To operate successfully when enormous change is happening outside of the enterprise requires agility.

Finding new uses for data and new insights from that data help create growth and stability – even under pressure.

Polaroid was not able to respond effectively to change in the marketplace even though it had some real advantages in the imaging and even printing worlds. It had a reputation for good commercial products, and the company was known for investing in research. Yet, with all of these advantages, it did not capitalize on its early knowledge of digital cameras, nor did its leaders realize that the appetite for digital images was endless.

A digital transformation would have allowed Polaroid to adapt to external marketplace forces.

 

Evelyn L. Kent is a content analytics consultant who specializes in building semantic models. She has 20 years of experience creating, producing and analyzing content for organizations such as USA Today, Tribune and McClatchy.

 

The post How to Form Your Digital Transformation Strategy appeared first on Lucidworks.

3 Common Data Challenges That Hamper Your Customer’s Search Experience


In a recent post, Marie Griffin discussed AI-powered search's ability to improve the customer experience, boost loyalty, and increase purchases. And while there is no doubt smart search can do all of that, the characteristics of product data can make the going rough and set you on a path to failure.

Artificial intelligence (AI) needs data to learn, and no matter how big a data set is, AI will fail if data quality is poor. Data often has structural issues, such as typos, irregular capitalization, and multiple labels for the same type of data. It can also be in different formats, be inconsistent, and contain outliers and distractions.

GitHub Senior Data Scientist Kavita Ganesan explains how such data challenges hold companies back from adopting AI-powered solutions: "Whether it's labeled or unlabeled data or search logs, organizations need to readily have a good data store in place for data scientists to explore and build models. Creating a highly accessible data store is a heavy investment and requires lots of data engineering time to make things happen."

We’ve listed the three most common data scenarios that gunk up data training — and what you need to do to address them.

1. Messy Data Makes it Difficult to Find What You Need

Solution: Indexing pipeline, regex replace filtering, and scripting can help.

In an ideal world, your data is well organized, easy to sort through, and simple to understand. Unfortunately, you have to deal with the data you’re dealt. But there are ways to be clever with cleanup and massaging of messy data to improve discovery and classification, giving the user or customer the ability to easily navigate the site and find what they’re looking for.

Some basic janitorial duties include normalizing values. For example, say you have a "color" field with values "Blue" and "blue." A basic capitalize normalizer would funnel all bLuE variants into just Blue. In this only slightly contrived example, the cleaned values are written to a new `clean_color` field so the before-and-after facets can be contrasted.

During indexing, a little bit of script was added to capitalize the colors. In Fusion's Index Workbench (IBW), this is done interactively; quick trial and error to get it right is how these things are tackled.
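Outside of the UI, the gist of such a normalizer can be sketched in a few lines of Python. This is a generic illustration only, with invented function and field names, not Fusion's actual pipeline API:

```python
def normalize_color(doc, source="color", target="clean_color"):
    """Copy a field into a normalized companion field with consistent capitalization."""
    value = doc.get(source)
    if value is not None:
        # "bLuE", "BLUE", and "blue" all funnel into "Blue"
        doc[target] = value.strip().capitalize()
    return doc

normalize_color({"id": "123", "color": "bLuE"})
# -> {"id": "123", "color": "bLuE", "clean_color": "Blue"}
```

Writing to a new field rather than overwriting the original keeps the raw value around for auditing and for contrasting the facets before and after cleanup.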

The process of data ingestion and refinement requires several iterations to get ironed out, and in many cases is an ongoing effort of bringing in and refining new datasets to be effectively leveraged for querying. IBW allows quick visual iteration on datasource and parser configurations and index pipeline fiddling.

Important chunks of metadata often exist even in the most minimal of provided product data. Given a crude description like "This is a blue iPod armband" and some basic domain terminology cross-references, structured fields can be extracted straightforwardly.
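As a hypothetical illustration (the field names and lookup table here are invented for the sketch), that one-line description plus a small terminology cross-reference might be ingested as:

```python
raw = "This is a blue iPod armband"

# hypothetical domain cross-reference: term -> (field, normalized value)
lexicon = {
    "blue": ("color", "Blue"),
    "ipod": ("compatibility", "iPod"),
    "armband": ("product_type", "Armband"),
}

doc = {"description": raw}
for token in raw.lower().split():
    if token in lexicon:
        field, value = lexicon[token]
        doc[field] = value

# doc -> {"description": "This is a blue iPod armband", "color": "Blue",
#         "compatibility": "iPod", "product_type": "Armband"}
```

Even this crude tagging turns a bare description into facetable fields.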

This allows for smoother, more effective navigation of your data.

2. Little Data Structure — So You’re Missing Matches

Solution: Add structure with basic tagging and extraction up to automated classification.

Typos and messes happen, so being less strict in classification can ensure that you're not mistakenly cutting out potential matches. Searching through data with fuzzy logic, phonetic spellings, and regex matching are ways to avoid skipping over what you're looking for.
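A rough sketch of loose matching using only Python's standard library (real search engines use analyzers, phonetic filters, and edit-distance query operators instead, but the idea is the same):

```python
import difflib

categories = ["Financial", "Technology", "Healthcare"]

def loose_match(term, candidates, cutoff=0.8):
    """Return candidates similar enough to the term, tolerating typos."""
    return difflib.get_close_matches(term, candidates, n=3, cutoff=cutoff)

loose_match("Finacnial", categories)  # -> ["Financial"]
```

A strict equality lookup would find nothing for the typoed term; the similarity cutoff lets the near-miss through.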

But, really, before firing up anything to do with machine learning, before writing code, and before getting out the data cleaning tricks, it’s best to get the source data corrected. The data came from somewhere – push back upstream to see if you can correct the data (and even related processes) well before it reaches you. It’s worth a try and makes life easier when data starts clean and stays clean.

[If you’re starting from scratch, here are some guidelines on building a classifier.]

Here's a real-life example of how bad data impacts projects.

Once upon a time, a friend was presenting a new, improved Solr-based search system to a room of bigwig stakeholders, when a bigwig domain expert pointed out that they could not deploy this system because of all the "bad data" it was showing. The folks in the room had never seen their data faceted, and it was easy to spot the mistakes.

"Financial (37)" was acceptable, but "Finacnial (1)" was glaring. Just fix that record and re-index! Or if you can't, keep reading for further solutions to that one poor record, left in a typoed category and unfindable through intuitive navigation.

Imagine a product like “ipad” — where “ipda” is as likely to be typed.

And here's a reinforcement: "However if a document is not appropriately tagged, it may become invisible to the user once the facet that it should be included in (but is not) is selected," said our beloved Ted Sullivan in an older but prescient article titled "Thoughts on Search vs. Discovery."

3. Data That Doesn’t Learn Along the Way Creates Stagnancy

Solution: Collect user signals and feed improvements back into the system.

Considering how our data systems are used (everything is a search-based app nowadays), there's a lot to be gleaned from the queries users issue and the contexts they access them from. We've gotten pretty good at search and logging, but the missing magic is when the two are married with machine-learned improvements that continually tune results.

Your data is being used. Or is it? And who’s using it? Are some products/documents not seeing the light of day because they rarely appear in the search results? Looking into your knowledge system usage helps identify hot topics and trends, but don’t forget to look in those dark and dusty corners where things are hiding and forgotten.

  • Learning about usage helps identify spelling corrections. Learn from me when I want "bleu shoes," oops, backspace backspace, correct, I mean "blue shoes."
  • Learn that the item on the second page is really more relevant and useful because I clicked on it, skipping over the first page of results.
  • Learn the relevant parts of queries using Head/Tail analysis so your system better understands users’ intent on unclearly phrased queries.
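As a toy sketch of the first idea, mining quick query reformulations from a session log for likely spelling corrections might look like this (synthetic data and made-up thresholds; a real signals pipeline is considerably more involved):

```python
import difflib

# toy session log: (session_id, seconds_elapsed, query)
log = [
    ("s1", 0, "bleu shoes"),
    ("s1", 4, "blue shoes"),
    ("s2", 0, "red hat"),
    ("s2", 90, "blue shoes"),
]

def mine_corrections(log, max_gap=10, min_sim=0.8):
    """Pair up quick, near-identical reformulations that look like typo fixes."""
    corrections = []
    for (sid1, t1, q1), (sid2, t2, q2) in zip(log, log[1:]):
        same_session = sid1 == sid2
        quick = (t2 - t1) <= max_gap
        similar = difflib.SequenceMatcher(None, q1, q2).ratio() >= min_sim
        if same_session and quick and similar and q1 != q2:
            corrections.append((q1, q2))
    return corrections

mine_corrections(log)  # -> [("bleu shoes", "blue shoes")]
```

Pairs mined this way can seed a spell-correction dictionary that the query pipeline then applies automatically.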

The short of it? You’ve got data. And you’ve got metadata, such as domain terminology, categories, taxonomies, and the like. Make the most of what you’ve got, using modern tools that can fix, improve, and learn from these things.

Data needs care and feeding to realize its full glory. In the case of retailers, successful systems power incredible customer experience. Be sure your data toolbox includes everything from basic regexes to machine learning to avoid those three most common obstacles to the data discovery and classification that’s feeding your shopper’s search and experience.

Businesses that put in the time and money to address these three most common (and fixable) pitfalls to data discovery and classification will take the lead in the long run. Being better prepared to incorporate the latest machine learning tools into their search experience will enable them to deliver the highly-coveted, best-in-class customer experience.

Erik explores search-based capabilities with Lucene, Solr, and Fusion. He co-founded Lucidworks, and co-authored ‘Lucene in Action.’


Webinar Recap: Question Answering and Virtual Assistants with Deep Learning


I recently hosted a webinar with my colleague Andy Liu on “Question Answering and Virtual Assistants with Deep Learning.” I’ve linked the full replay below and am sharing this post to break down some of the points that Andy and I covered in more detail.

Watch the Entire Webinar Recording “Question Answering and Virtual Assistants with Deep Learning”

First, some definitions (with apologies to the Deep Learning aficionados out there who already know some of this). A Question Answering (QA) system is a system that automatically answers questions asked in natural language. Questions tend to be slightly longer than a standard keyword search.

A chatbot is a computer program that uses natural language processing (NLP) and artificial intelligence (AI) to answer questions and elicit more questions to conduct a conversation.

Deep Learning is part of a broader family of machine learning methods based on the layers used in artificial neural networks. Learning can be supervised, semi-supervised or unsupervised.

Current chatbot solutions are usually narrowly focused on a few specific areas but do not cover a broad range of scenarios. They may be able to answer questions for common use cases like hotel booking or restaurant reservations out of the box, but they can handle only a limited number of questions.

Chatbot administrators manually provide examples, specify patterns of intent, build ontologies or use rule-based approaches. These chatbots only rarely use the company’s own knowledge bases. They aren’t really learning and having a conversation — they are just following a script.

Deep Learning to Power QA Systems

At Lucidworks AI Lab, we have taken a different approach to designing a virtual assistant system. We use Deep Learning to power QA solutions.

Call centers and internal IT teams have used these systems for self-service deflection of calls or tickets. E-commerce teams have used rules engines for product recommendations.

Some have extracted QA pairs from Slack and email conversations to extract some knowledge. But those legacy systems can’t scale their intelligence with the pace of the modern business. They can’t keep up with changing customer expectations. Based on static rules and ontologies, they fall behind, fall out of sync, and provoke users to go back to Google for answers.

The Lucidworks Question Answering system automatically recommends answers from a dynamic FAQ pool or turns to historical data and finds relevant answers to similar questions that have been asked before.

Even if no historical question-answer pairs exist, you can begin with a cold-start solution based on unsupervised learning. Overall, you'll be able to improve search for long queries by leveraging neural semantic search capabilities.
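The core of neural semantic search is comparing a query and candidate answers in a shared embedding space rather than by token overlap. A minimal illustration with tiny made-up vectors (a real system uses learned embeddings with hundreds of dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# pretend embeddings for a long query and two candidate FAQ answers
query_vec = [0.9, 0.1, 0.3]
answers = {
    "How to reset a forgotten password": [0.8, 0.2, 0.35],
    "International shipping rates": [0.1, 0.9, 0.2],
}

best = max(answers, key=lambda a: cosine(query_vec, answers[a]))
# best -> "How to reset a forgotten password"
```

Because similarity is measured in vector space, a long natural-language question can match an answer that shares meaning but few exact words.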

Deep Learning With Small Training Sets

The Labs team conducted a comprehensive study comparing various methods and learned that an approach based on Deep Learning provides the best results, even with small training data. This approach doesn’t require some of the heavy feature engineering work required by classical approaches to NLP.

QA systems can be trained in Docker containers, which makes it easy to use either on-prem or in the cloud.

Paired with autopilot mode, it provides automatic parameter tuning and dynamic decisions based on data and hardware information. This means that users who are not data scientists can run QA modules and get state-of-the-art results with minimum supervision. Or, data scientists can use an advanced mode, without the standard guardrails.

Fusion supports optimized query pipelines that enable fast run-time neural search. It leverages Solr token-based search or dense vector clustering for retrieving initial candidates for further reranking by the QA model.
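That retrieve-then-rerank pattern can be sketched generically in Python. The toy scoring functions below stand in for Solr's token matching and the neural QA model; they are illustrations, not the actual implementations:

```python
def retrieve_candidates(query, index, k=100):
    """Stage 1: cheap token-overlap retrieval over the whole index."""
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(index, key=overlap, reverse=True)[:k]

def rerank(query, candidates, score_fn):
    """Stage 2: an expensive model scores only the short candidate list."""
    return sorted(candidates, key=lambda d: score_fn(query, d), reverse=True)

index = [
    "How do I reset my password?",
    "Shipping rates for international orders",
    "Password reset link not arriving",
]

# toy stand-in for a neural QA relevance model
def toy_score(q, d):
    return len(set(q.lower().split()) & set(d.lower().split())) / len(d.split())

results = rerank("reset password",
                 retrieve_candidates("reset password", index, k=2),
                 toy_score)
```

The design point is cost: the cheap first stage touches every document, while the expensive model only ever sees the top-k candidates.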

As we mentioned in the webinar, this functionality is not yet generally available in Lucidworks Fusion, but is available to select customers through the summer. If you’re interested in evaluating this Question Answering (QA) functionality, you can pilot the Lucidworks Question Answering solution in early preview before it becomes generally available this summer.

Sava is an AI Research Engineer at Lucidworks whose main focus is researching, developing, and integrating cutting-edge NLP algorithms into the Fusion AI platform. His work helps users obtain more relevant search results that are also enriched by additional information like sentiment, topics, and extracted named entities.


Banking Taps AI to Spot Money Laundering Schemes


In April, Danske Bank lowered its 2019 outlook as it deals with the fallout of a $223 billion money laundering scheme. Swedbank’s stockholders are nervous about U.S.-scrutiny of its money laundering policies as it, too, has been caught up in the probe, Reuters reports. And a regional Russian bank owned by a former U.S. congressman from North Carolina had its license revoked on April 5th for violations related to money laundering rules, according to American Banker.

This despite the fact that, for decades, governments around the world have pressured banks and financial services firms to serve as partners to law enforcement when it comes to preventing criminals and corrupt public officials from disguising money gained through illicit means as legitimate income — money laundering — and moving it into the financial system.

Today, there are signs that the U.S. government is taking the first steps to require that financial services firms adopt artificial intelligence (AI) and machine learning-based tools to help root out money laundering.

The evolution of rules and standards that banks and financial services firms must follow to remain in compliance with the anti-money laundering (AML) requirements enshrined in the Bank Secrecy Act and other legislation has followed a predictable pattern in the past.

Typically, large institutions with the capacity to invest in cutting-edge technology and processes will attempt to innovate, finding ways to make compliance more efficient and effective. If regulators see value in a new approach, they will give the firms license to experiment with it.

Eventually, though, as new methods are perfected, what was once seen as an innovative BSA compliance technique starts to be seen as a best practice — something every bank should be doing.

For example, more than two decades ago, large banks that were sensitive to the headline risks of being caught with dirty money on their books sometimes went to great lengths to identify the beneficial owners of companies they did business with. It was a sometimes tortuous process that involved peeling back layers of shell companies to identify the true owner of a company or an asset.

Eventually databases and processes were developed on a large enough scale that regulators decided identifying beneficial owners was no longer optional. In May of 2018, a new customer due diligence rule made it a requirement.

AI and Machine Learning Can Improve Anti-Money Laundering Efforts

One can see the same sort of development beginning in the use of AI and machine learning in the anti-money laundering space.

For banks, BSA compliance has always been an extremely data-intensive process, and knowing who owns the companies they do business with is only one part of the puzzle. Banks need to verify that individual customers are who they say they are. They also want to know who their customers do business with, whether they are related or otherwise connected to public officials or known criminals and terrorists, and whether they exhibit patterns of behavior consistent with illegal activity.

Doing this effectively requires assembling and analyzing vast amounts of information from a wide array of sources — news reports, financial statements, corporate filings, judicial records and much more.

In December, the major financial services regulatory agencies in the U.S., including the Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, the Financial Crimes Enforcement Network (FinCEN), the National Credit Union Administration, and the Office of the Comptroller of the Currency, issued a joint statement recognizing that many banks are experimenting with new technology to help them process that data more effectively.

“Some banks are experimenting with artificial intelligence and digital identity technologies applicable to their BSA/AML compliance programs,” they wrote. “These innovations and technologies can strengthen BSA/AML compliance approaches, as well as enhance transaction monitoring systems. The Agencies welcome these types of innovative approaches to further efforts to protect the financial system against illicit financial activity. In addition, these types of innovative approaches can maximize utilization of banks’ BSA/AML compliance resources.”

At the same time, FDIC Chairman Jelena McWilliams signaled that using big data to assist in anti-money laundering compliance won’t always be the province of the financial services industry’s biggest players.

“New technology, such as artificial intelligence and machine learning, can provide better strategies for banks of all sizes to better manage money-laundering and terrorist-financing risks, while reducing the cost of compliance,” she said.

Big Gains in Transaction Monitoring

Given that criminals also benefit from advances in technology, it's actually past time for financial services firms to begin incorporating AI and machine learning into their compliance processes, says Patrick Craig, the financial crime technology lead for EY's Europe, Middle East, India and Africa practice.

“The current AML approach is struggling to keep pace with modern money laundering activity,” he wrote. “There is a real opportunity for AI not only to drive efficiencies, but more importantly to identify new and creative ways to tackle money laundering.”

One of the key challenges for bank anti-money laundering programs is transaction monitoring: the process of combing through transaction data to identify potentially suspicious activity.

Current automated systems produce a large volume of "false positives." That is, they flag transactions that are actually normal activity as suspicious, which requires further investigation by bank staff. Unfortunately, by some estimates, up to 99 percent of transactions flagged by current systems are false positives.

“AI offers immediate opportunities to significantly reduce operational cost with no detriment to effectiveness by introducing machine learning techniques at different stages of the transaction monitoring process,” Craig writes. “AI is also being increasingly applied to customer due diligence and screening controls using natural language processing and text mining techniques.”

AI Can Help With Know Your Customer Rules

So-called Know Your Customer rules — requiring banks to do extensive and ongoing due diligence on customers — are another area in which AI shows huge promise for improving both efficiency and performance, he said.

“AI could bring increased breadth, scale and frequency to holistic KYC reviews in a way that better integrates ongoing screening and monitoring analysis. Risk and detection models would assess and learn from a richer set of inputs and produce outcomes in the context of both the customer’s profile and behavior. By leveraging AI’s dynamic learning capability coupled with skilled investigators, this model could be used to augment operations, provide quality control and even be used to train new resources.”

According to Scott Zoldi, chief analytics officer at Fair Isaac Corp., AI and machine learning techniques can provide a “superhuman” boost to financial services professionals working in the AML area. But first, you have to be able to explain it to them.

The systems that are being developed, he said, are able to establish connections across disparate data sources to create a clearer picture of clients' activity. For example, algorithms can be developed to identify clients whose activity trips a number of different threshold levels simultaneously. Consider a client whose average transaction dollar amount, number of deposits originating overseas, and velocity of money flow to "risky" countries are all above a bank's 90th percentile: that client would be flagged for extra scrutiny.
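That multi-threshold idea can be sketched in plain Python. The data is synthetic and the nearest-rank percentile is hard-coded for the illustration; a production system would learn which feature combinations matter rather than fixing them by hand:

```python
def percentile_90(values):
    """Simple 90th-percentile estimate via sorting (nearest-rank method)."""
    ordered = sorted(values)
    idx = max(0, int(0.9 * len(ordered)) - 1)
    return ordered[idx]

def flag_clients(clients, features):
    """Flag clients above the 90th percentile on ALL features simultaneously."""
    cutoffs = {f: percentile_90([c[f] for c in clients]) for f in features}
    return [c["id"] for c in clients
            if all(c[f] > cutoffs[f] for f in features)]

clients = [
    {"id": "a", "avg_amount": 200, "overseas_deposits": 1, "risky_flow": 0.1},
    {"id": "b", "avg_amount": 9000, "overseas_deposits": 14, "risky_flow": 0.9},
    {"id": "c", "avg_amount": 300, "overseas_deposits": 2, "risky_flow": 0.2},
]
features = ["avg_amount", "overseas_deposits", "risky_flow"]
flag_clients(clients, features)  # -> ["b"]
```

Requiring all thresholds to trip at once is what narrows the focus: any single trigger alone would flood investigators with false positives.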

In the past, any single one of those triggers might have been enough to send up a red flag, resulting in a flood of false positives. But a system empowered with machine learning techniques can narrow the focus considerably.

According to Zoldi, results are “based on the totality of data used to construct the machine learning model and is probabilistic.” It can also return specific reasons why an individual was flagged, giving investigators a head start in the effort to determine whether further action is required.

"Machine learning for AML can help with key compliance challenges like false positives, gaining new insight and understanding of customer behavior, and providing decision logic that is clear and explainable. Clearly, machine learning technology adds a 'superhuman' boost to the efficacy of AML efforts!" Zoldi wrote.

Algorithms Can Be Biased

However, turning over much of the hard work of detecting possible money laundering activity comes with its own set of risks. Writing in the Harvard Business Review in August 2018, Lisa Quest, Anthony Charrie, Lucas du Croo de Jongh, and Subas Roy warned that algorithms are only as good as the people who create them, and can have bias and blind spots built into them inadvertently.

They recommend a robust system of back testing results, to protect against a variety of possible problems. The idea is to uncover issues, such as patterns of false positives, that could lead to discrimination allegations; systematic failure to recognize specific activities as suspicious; and failure to keep up with new approaches criminals develop to move illegitimate funds into the financial system.

“To prevent this from happening, companies need to create and test a variety of scenarios of cascading events resulting from AI-driven tools used to track criminal activities,” they wrote. “To outsmart money launderers, for example, banks should conduct ‘war games’ with ex-prosecutors and investigators to discover how they would beat their system.”

The bottom line, according to McKinsey & Company's Stuart Breslow, Mikael Hagstroem, Daniel Mikkelsen, and Kate Robu, is that banks that invest in AI and machine learning technology for anti-money laundering compliance, with the proper safeguards in place, can reap huge benefits. The technology can drive compliance-error rates down below 5 percent, compared to an industry average of about 30 percent. Additionally, the vexing problem of false positives can be cut nearly in half.

They write, “Banks that invest strategically in these three areas, rather than tactically reacting to market and regulatory changes, can over time substantially reduce their risk exposure and capture other substantial benefits.”

Rob Garver is a DC-based financial services reporter.

The post Banking Taps AI to Spot Money Laundering Schemes appeared first on Lucidworks.


How Employees Are Key to Digital Transformation


The momentum for companies to undergo digital transformation continues to grow. However, too often, digital transformation is thought of as just a technological change, forgetting that we need stewards of change. It turns out that for companies to succeed with digital transformation, they have to shift focus beyond technology and invest in their people.

Telstra, Australia’s largest telecommunications carrier, offers digital transformation services worldwide. As the underbelly of countless enterprises and organizations around the globe, Telstra surveyed 3,810 respondents from 12 industries in 14 markets around the world on where they were on their transformation journey.

As highlighted in the resulting Disruptive Decision-Making Report (as well as other research), businesses are actually further along with transforming their technology than they are with changing the corporate culture and individual mindsets of employees — both of which are necessary to profit from digital transformation.

In fact, Telstra found that organizations currently have the lowest confidence in how their people are contributing to digital transformation decision-making, when compared to facets such as technology and processes.

It’s More Than Dropping in Technology

According to the survey, organizations that have devoted the most attention to people and processes are more digitally mature and more advanced in their transformation journeys. In short, companies that overlook or ignore the people side of digital transformation will fail.

But like so many aspects of digital transformation, achieving this focus is easier said than done. The survey’s authors recommend three steps companies can take to improve the people dimension of digital transformation:

  1. Understand what digital transformation means for your organization
  2. Empower people and strengthen processes
  3. Be confident in your technology so you concentrate on driving change

These recommendations support the idea that the only way to thrive in the people dimension is by ensuring that digital transformation isn’t seen as an outside force that is inflicted on staff. Instead, it must be done in partnership, with those at the C-level setting a vision that has buy-in at all levels of the organization.

To gain a better sense of how companies are navigating these changes in practice, we looked at some best practices and case studies. We summarize them below.

Change Is Anathema to Organizations

“We know from neuroscience – and from everyday life – that groups tend to resist change,” says Ellen Leanse, chief people officer at Lucidworks, who has taught neuroscience and innovation at Stanford University. “Employees join an organization, and when it comes time to change, they tend to grasp to the status quo rather than look objectively at what the change will bring about.”

Leanse, who speaks of the brain as “an ancient technology we use to navigate modern life,” explains that the brain tends to resist change and unknowns as part of its default fast thinking mode.

“The brain, ultimately, is a survival device. It optimizes for familiar behavior and often resists objective, open-minded, curious thinking. After all, it has no proof that the ‘new’ will keep it, or us, safe and alive. Research shows that shared beliefs and behaviors stay even more entrenched in group settings. It’s easy to see some evolutionary benefits for that sort of psychology. Yet we all know the price when ‘groupthink’ sets in.”

To address this, Leanse suggests getting buy-in on a shared vision: something the group is willing to agree is a worthy goal. Then, emphasize cooperation and curiosity as paths to achieving this vision. This helps guide groups out of familiar, comfortable behaviors and find motivators (such as social connection and a sense of learning) that increase the satisfaction associated with the goal.

Help them feel like owners, empowered to meet their own (and shared) goals, as well as the organization’s. “When transformation is seen as a shared journey to a result a group agrees to,” says Leanse, “discomfort and resistance can fall away.”

Culture and People Are as Important as Technology

As new technology is being inserted into nearly all aspects of the business, a profound shift is taking place. As an article in the Enterprisers Project points out, this shift has a significant impact on company culture.

“Digital transformation initiatives often reshape workgroups, job titles, and longtime business processes. When people fear their value and perhaps their jobs are at risk, IT leaders will feel the pushback,” write the article’s authors.

To ease this pushback, the article suggests IT leaders start with empathy, seeking to fully understand the anxieties and fears staff might have about integrating new technology.

A 2018 survey from McKinsey & Company came to similar conclusions. While emphasizing the importance of technology in transformations, the report also found that companies with strong leadership and enterprise-wide workforce planning and talent-development practices had higher success rates than those that did not.

Empower workers. In an MIT Sloan Management Review analysis of digital transformation, George Westerman, Didier Bonnet, and Andrew McAfee argue that worker empowerment is vital to positive digital transformations. This can involve showing workers how the new technology will help them do their jobs more efficiently or the benefits of a more digital workplace that allows staff greater freedom to work when, where, and how they want to.

The McKinsey survey also reinforces the idea that workers should be empowered. “Another key is giving employees a say on where digitization could and should be adopted. When employees generate their own ideas about where digitization might support the business, respondents are 1.4 times more likely to report success,” it says. The report also recommended that workers play a key role in enacting changes.

Prioritize culture. One of the key traps many companies fall into is underestimating the importance of culture in digital transformation. As Greg Satell argues in a piece for Inc., digital transformation is human transformation. To gain buy-in, a strong vision that aligns with business objectives is crucial. But so is starting transformation with automation that improves the day-to-day lives of staff, reducing the need for tedious tasks.

Articles in Jabil and CIO also point out that companies must engage their staff from the outset in the transformation process to overcome employee pushback and larger organizational resistance.

These articles and the Telstra survey illustrate that technology can take an organization only so far when it comes to digital transformation. People play a key part, as technology can’t replace the need for organizational cohesion that comes only when everyone from the C-suite and down understands why change is necessary, what the changes will be, and how the business will improve as a result.

But it all starts with changing what we look for in employees, says Leanse. “Instead of looking for people who will just follow orders, we need to make sure that we hire people who have excelled in cooperation and who embrace empowerment.”

Cooperation says we are all in it together, she explains. And empowerment means you are giving your employees the tools to learn, grow and control their autonomy — while they help the organization.

Dan Woods is a Technology Analyst, Writer, IT Consultant, and Content Marketer based in NYC.


Learn How the Fortune 500 Combine Search and AI to Build a Better Online Shopping Experience


On Wednesday, May 15th, Lucidworks is hosting its first Activate Now event in New York City. Register now to hear from leading digital commerce companies on their strategies for building a hyper-personalized shopping experience for customers and the impact artificial intelligence is having on the role of merchandising. The event will include talks and panels from retail industry leaders and networking opportunities for attendees.

Activate Now is Lucidworks’ first one-day digital commerce conference, an offshoot of the annual Activate search and AI conference. This event is a localized opportunity to learn from and network with digital commerce leaders. Attendees will hear how brands create personalized shopping experiences using search, AI, and data analytics to understand customer intent, deliver relevant recommendations, and increase add-to-cart and conversion rates.

“Our retail customers are committed to delivering the same level of expert suggestions and personal customer experience online that they do in our store to drive revenue and loyalty,” said Will Hayes, Lucidworks CEO. “How we get them there continues to evolve and grow. Relying on machine learning to augment our merchandisers’ capabilities and deploying AI that enables smarter recommendations for shoppers is one way to create that personal customer experience and drive purchases. We’re looking forward to sharing more best practices from some of the biggest brands at Activate Now.”

The event will be held at the Helen Mills Event Space and Theater at 137 West 26th Street in New York City. Full schedule is below:

Registration & Networking: 8:30-9:00am

Delivering a Hyper-Personalized Customer Experience

  • Will Hayes, CEO, Lucidworks

How AI Might Impact the Role of Merchandising (panel)

  • Ronak Shah, Architect, major homeware brand
  • Katharine McKee, Founder, Digital Consultancy
  • Liz O’Neill, Sr. Digital Commerce Mgr, Lucidworks
  • Diane Burley, VP Content, Lucidworks

Why Customers Can’t Find What They Are Looking For

  • Peter Curran, President and CEO, Cirrus10

Enhancing Search Experience at Speed and Scale

  • Pavan Baruri, Director – Core Services (Customer Experience), Foot Locker

AI-powered Search for Increased Conversions

  • Grant Ingersoll, CTO, Lucidworks

Networking Lunch, 12:30-2:00pm

Join e-commerce leaders for education, inspiration, and networking. Tickets are $199 for attendance and lunch. Press passes are available with valid accreditation. Full details at https://bit.ly/2XDNh8i.


User Intent Steers AI-Powered Search


Consumers expect personalized experiences on retail sites. After all, they get them on social media, in entertainment, on mobile devices, and while they search the web — why settle for less when shopping?

Marketers agree. A recent survey of 700 U.S. marketers conducted by ClickZ and Chatmeter found that fully half of respondents identified “changing customer behavior driven by new technology” as the number one search trend of 2019.

In ecommerce, the shopper’s search experiences can vary widely. Amazon’s search and recommendation engines, powered by artificial intelligence (AI) and machine learning (ML), provide highly relevant and personalized results based on a shopper’s prior purchases and browsing, among other factors, but the search experience on most other retail sites is often disappointing.

The situation is beginning to change, though. Trends in distributed computing and open-source programming are making it possible for organizations without enormous scale and resources to dramatically improve ecommerce search and recommendations for shoppers — when they work with the right partner.

To find out why search is challenging from a technical perspective and how that’s changing, Lucidworks tapped the expertise of Peter Curran, president and cofounder of Cirrus10 and a consultant on the business and technology of ecommerce, in a question-and-answer session.

Curran has been a featured speaker in two recent Lucidworks webinars, Why Your Customers Can’t Find What They Are Looking For and Create the Ideal Customer Experience—Without Giving Up Control. He has overseen the implementation of Lucidworks’ Fusion search application with three clients, but he also works with other solution providers and on projects other than search.

Q: What technologies are making it possible for ecommerce companies to improve their search significantly?

A: With the emergence of ML and distributed computing, companies can do things they couldn’t do before without writing a bunch of rules. For example, Lucidworks Fusion has a distributed computing architecture based on Apache Spark that is integrated with a search platform. Distributed computing allows search engines to handle more scenarios algorithmically, and Fusion can do a lot of complicated ML calculations that would be too computationally expensive without this marriage.

Q: Why has search been such a challenge for e-retailers?

A: E-commerce search is hard to manage because there are a million different ways people can tell you what they want, and it’s hard to predict the words they’re going to use. Then, the language itself is complicated. For example, if you search for shorts, you might get a pair of short pants, but you might also see a short-sleeve shirt, a short skirt, a “skort” or any number of other things. This happens because the words short and shorts have a common root, and the search engine can’t distinguish between short as an adjective and shorts as a noun. To make matters worse, what we normally call a pair of shorts is technically called just a “short.”

To deal with these issues, retailers set up rules, but manual rules might cause different problems. An example is a search for “leopard print.” In the results from a website I showed in one of the Lucidworks webinars, I get prints, as in wall art, with various animal images in my search results, but I don’t get any garments with a leopard pattern, which is what I wanted. This started when someone — probably someone focused on home décor — decided that the word leopard should be equivalent to the word animal but didn’t think about “leopard print” fabrics.

Tuning searches manually not only takes a long time to do in the first place, but once you’ve put in a manual curation rule, you have to maintain it as time goes on. Rules don’t last forever because products and searches change. You also have limitations on the number of rules you can put in. A manual process is painful and just not sustainable.

Q: AI and machine learning can process much higher volumes of data much faster than humans, but why doesn’t it make the same kinds of mistakes?

A: Machine learning is about pattern recognition and drawing conclusions based on patterns. Lucidworks uses what’s called signal ingestion or signal processing. Signals are all the user interactions — queries, clicks, add-to-cart, checkout, etc. — as well as all the profile and/or third-party data you have about a session. A signal is a broad concept, but there is a clear set of best practices.

The system isn’t “understanding” the meaning of the words; it is learning through search behaviors which words and combinations of words are important to determine the searcher’s intent.

Let’s say people are searching a site like Best Buy or Walmart for “dysentery.” The system doesn’t know that it makes no sense for a person to search for dysentery, which is an intestinal infection, on a normal big-box-store ecommerce site. But by going through enough sessions, the ML begins to see that people who search dysentery are buying Dyson vacuums.  No matter how informed they are about the product or the customer, a human merchandiser would never think to create a rule like “dysentery=Dyson.” The pairing is not logical to human intelligence, but it works for real-life searches because the iPhone’s internal spelling correction changes the word Dyson to dysentery.
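The kind of pattern mining described above can be sketched in a few lines. This is a toy illustration, not Lucidworks’ actual signal pipeline: the session data and the support threshold are invented, and real systems aggregate many signal types, not just purchases.

```python
from collections import Counter

# Hypothetical (query, purchased-product) pairs drawn from session logs.
signals = [
    ("dysentery", "Dyson V8 Vacuum"),
    ("dysentery", "Dyson V8 Vacuum"),
    ("dysentery", "Dyson Ball Animal"),
    ("dyson", "Dyson V8 Vacuum"),
    ("vacuum", "Dyson V8 Vacuum"),
    ("vacuum", "Shark Navigator"),
]

def top_product_for_query(query, signals, min_support=2):
    """Return the product most often purchased after a query,
    but only if the pairing recurs at least `min_support` times."""
    counts = Counter(p for q, p in signals if q == query)
    if not counts:
        return None
    product, n = counts.most_common(1)[0]
    return product if n >= min_support else None

print(top_product_for_query("dysentery", signals))  # "Dyson V8 Vacuum"
```

A recurring pairing like this is how “dysentery” can come to boost Dyson products, with no human ever writing the rule.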


Q: Retailers have inventory they need to sell and promotional commitments to meet. They need to match products to the right buyer, not necessarily match the buyer to the product. How can AI help with that?

A: Those types of business needs are just another feature to engineer, and it can be done either at the ML model level or it can be done as a human override. When you build an algorithmic search experience, you simply want to reduce the number of decisions that humans are making.

With Fusion, the system sometimes suggests new rules based on the data, and the user can decide whether to accept it. Fusion also allows merchants to come up with rules so that they can achieve business goals, but they have to test them and experiment with actual searches. As a fellow speaker in one webinar put it, “Experimentation is the best way to make a data-driven decision.”

Fusion is powerful because it gives a retailer just-add-water-type solutions to tune common ecommerce searches, and those have been tested over millions of searches with numerous retailers. Fusion 4.2 provides a business rules manager that gives retailers the ability to build their own models and plug them into the Lucidworks framework to customize it.

Q: We’ve just touched the surface of how AI and ML can transform search for consumers and retailers. What are your final thoughts?

A: I spent the beginning of my career in web content management, where we were always trying to make companies more efficient. I like working in ecommerce search because the work tends to make companies more money. The reason is that search shows intentionality. People who search generally want to buy, and the old saying is absolutely true: If they can’t find it, they can’t buy it.

So, if your search is hiding relevant results, or if your relevant results are lost in a sea of irrelevant results, or if your search UX is messed up, then an investment in addressing these things will almost always pay off. Some of the companies that have implemented Fusion have been able to return their investment in a matter of days. It can be quite an enormous and quick return.

Marie Griffin is a writer and editor with extensive experience covering retail, technology, media and other B2B topics. She has held multiple editorial leadership positions, including editor/associate publisher of Drug Store News, and has been freelancing for web/print publications and marketers since 2001.


Using Solr Tagger for Improving Relevancy


Full-text documents and user queries usually contain references to things we already recognize, such as names of people, places, colors, brands, and other domain-specific concepts. Many systems overlook this important, explicit, and specific information which ends up treating the corpus as just a bag of words. Detecting, and extracting, these important entities in a semantically richer way improves document classification, faceting, and relevancy.

Solr includes a powerful feature, the Solr Tagger. When the system is fed a list of known entities, the tagger provides details about these things in text. This article steps through using the Solr Tagger to tag cities, colors, and brands, illustrating how it could be adapted to your own search projects.

What is the Solr Tagger?

Entity recognition isn’t new, but prior to Solr 7.4, it took a lot of Python and hand-wavy stuff. That got the job done, but it wasn’t easy. Then the Solr Tagger was introduced with the release of Solr 7.4, an incredible piece of work by Solr committer extraordinaire David Smiley.

The Solr Tagger takes in a collection, field name, and a string and returns occurrences of tags that occur in a piece of text. For instance, if I ask the tagger to process “I left my heart in San Francisco but I have a New York state of mind” and I’ve defined both “San Francisco” and “New York” as cities, Solr will say so:

  "response":{"numFound":2,"start":0,"docs":[
      {
        "id":"5128542",
        "name":["San Francisco"],
        "type":"city",
        "countrycode":"US"},
      {
        "id":"5128512",
        "name":["New York"],
        "type":"city",
        "countrycode":"US"}]
  }

excerpt of sample output

Although Solr’s tagger is a naive tagger and doesn’t actually do any Natural Language Processing (NLP), it can still be used as part of a complete NER or ERD (Entity Recognition and Disambiguation) system, or even for building a question-answering or virtual-assistant implementation.

How the Solr Tagger Works

The Solr Tagger is a Solr endpoint that uses a specialized collection containing “tags” that are defined text strings. In this case, “tags” are pointers to text ranges (substrings; start and end offsets) within the provided text. These text ranges match _documents_, by way of the specified tagging field. Documents, in the general Solr sense, are simply a collection of fields. In addition to tags, users can define Metadata associated with the tag. For example, metadata for tagged “cities” might include the country code, population, and latitude/longitude.


The Solr Reference Guide section for the Solr Tagger has an excellent tutorial that is easy to run on a fresh Solr instance. We encourage you to go through that thorough tutorial first since we’ll be building upon that same tutorial data and configuration below.

For a deep dive into the inner workings of the Solr Tagger, tune into David Smiley’s presentation from a few years ago:

Tagging Entities

Expanding upon the Solr Tagger tutorial with cities, now we’re going to add additional kinds of entities.

Adding a single type field to the documents in the tagger collection gives us the ability to tag other types of things, like colors, brands, people names, and so on. Having this and other type-specific information on tagger documents also allows filtering the set of documents available for tagging. For example, the type and type-specific fields can facilitate tagging of only cities within a short distance of a specified location.

Using Fusion, we first get the basic geonames Solr Tagger tutorial data placed into Fusion’s blob store and create a datasource to index it (there are a few other sample entities in another datasource that we’ll explore next):

Before starting this datasource we modify the schema and configuration using Fusion’s Solr Config editor, according to the Solr Tagger documentation. While the Geonames cities data contains latitude and longitude, the tutorial’s simple indexing strategy leaves them as separate fields. Solr’s geospatial capabilities work with a special combined “lat/lon” field, which must be provided as a single combined comma-separated string. Solr’s out of the box schema provides a dynamic *_p (for “point”) field. In Fusion, the handy field mapping stage provides a way to set a field with a template (StringTemplate) trick, as shown for location_p here:

Also, the type field is set to literally “city” for every document in this data source, containing only cities.  We’ll bring in other “types” of entities shortly, but let’s first see what the type and location_p fields give us now:

http://localhost:8983/solr/entities/tag?overlaps=NO_SUB&tagsLimit=100&fl=id,name,*_s&wt=json&indent=on&matchText=true&json.nl=map

“san francisco” is the only string in that text known to the `entities` collection. There are 42 cities known exactly as “San Francisco” in this collection. (There are many more that have “San Francisco” as part of their names, as shown in the Fusion Query Workbench below.)

Lots of “San Francisco”s, but only 42 with exact match, and thus taggable with the current configuration.

Visualize Tags While Typing

The final touches here are to allow tagging of text while typing it. A simple (VelocityResponseWriter) proof of concept interface was created to quickly wire together a text box, a call to the Solr Tagger endpoint, and display the results. Typing that same phrase, the Solr Tagger response is shown diagnostically, along with a color-coded view of the tagged string. Tagged cities are color-coded yellow:

Assuming we are in San Francisco, California, at the Lucidworks headquarters, and we want to locate nearby sushi restaurants, we provide a useful bit of context: our location. Narrowing the tagging of locations to a geographic distance from a particular point, we can narrow down the available cities used to tag the string. When the location-aware context is provided, by checking the “In SF?” box, the tagger is sent a filter (fq) of:

(type:city AND {!geofilt sfield=location_p}) OR (type:* -type:city)

This geo-filters on cities within 10km (d=10) of the location provided (the pt parameter, specifying a lat/lon point).
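One way such a request might be assembled is sketched below. The fq string and the overlaps, matchText, pt, and d parameters are taken from the article; the helper name, coordinates, and remaining parameter values are illustrative assumptions, not the actual proof-of-concept code.

```python
from urllib.parse import urlencode

def tagger_params(near=None, d_km=10):
    """Build query parameters for a Solr Tagger request.
    `near` is an optional "lat,lon" string; when given, only cities
    within `d_km` of that point remain taggable, while non-city
    entity types pass through unfiltered."""
    params = {
        "overlaps": "NO_SUB",
        "tagsLimit": 100,
        "fl": "id,name,type",
        "wt": "json",
        "matchText": "true",
    }
    if near:
        # City docs must fall inside the geo radius; other types are exempt.
        params["fq"] = ("(type:city AND {!geofilt sfield=location_p}) "
                        "OR (type:* -type:city)")
        params["pt"] = near   # lat/lon point, e.g. near Lucidworks HQ
        params["d"] = d_km    # distance in kilometers
    return params

# Approximate San Francisco coordinates, assumed for illustration:
qs = urlencode(tagger_params(near="37.7749,-122.4194"))
print(qs)
```

Checking the “In SF?” box in the demo corresponds to sending the `near` form of these parameters.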

Tagging More Than Locations

So far we’ve still only tagged cities. But we’ve configured the Tagger collection to support any “type” of thing. To show how additional types work, a few additional types are brought in similarly (a CSV in the Fusion blobstore, and a basic data source to index it):

id,type,name
color-1,color,White
color-2,color,Blue
brand-1,brand,White Linen

Suppose we now tag the phrase “Blue White Linen sheets” (add “in san francisco” for even more color):

The sub-string “white” is ambiguous here, as it is a color and also part of “white linen” (a brand).  There’s a parameter to the tagger (overlaps) that controls how to handle overlapping tagged text. The default is NO_SUB, so that no overlapping tags are returned, only the outermost ones, which causes “white linen” to be tagged as a brand, not as a brand with a color name inside it.

End Offset – What’s Next?

Well, that’s cool – we’re able to tag substrings in text and associate them with documents.  The real magic materializes when the tagged information and documents are put to use. Using the tagged information, for example, we can extract the entities from a query string and turn them into filters.

For example, if someone searches for “blue shoes,” and blue is tagged as a color, we can find it with the tagger, extract it from the query string, and add it as a filter query. Effectively, we can convert “blue shoes” to q=shoes&fq=color_s:blue with just a few lines of string manipulation code. This changes the structure of the query and makes it more relevant. For one, we’re looking for blue in a color field instead of the document or product description generally. For another, it can be more efficient for longer queries, which otherwise get carried away by spreading terms across all fields regardless of whether the query terms actually make sense for those fields.
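A minimal sketch of that string manipulation follows. It is deliberately simplified: the real tagger returns character offsets into the query, whereas here we assume single-word tags already resolved to a (field, value) pair; the `rewrite_query` helper and the tag map are illustrative, not Fusion’s implementation.

```python
def rewrite_query(query, tags):
    """Move tagged entities out of the free-text query and into filters.
    `tags` maps a lowercased surface word to a (field, value) pair,
    e.g. derived from Solr Tagger output plus each tag document's type."""
    remaining, filters = [], []
    for word in query.split():
        if word.lower() in tags:
            field, value = tags[word.lower()]
            filters.append(f"{field}:{value}")
        else:
            remaining.append(word)
    return {"q": " ".join(remaining), "fq": filters}

tags = {"blue": ("color_s", "blue")}  # assumed tagger result for this query
print(rewrite_query("blue shoes", tags))
# {'q': 'shoes', 'fq': ['color_s:blue']}
```

The returned dict maps directly onto the q and fq parameters of a Solr request.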

Taking tagging to another level, Lucidworks Chief Algorithms Officer Trey Grainger presented Natural Language Search with Knowledge Graphs at the recent Haystack Conference. In this presentation, Trey leveraged the Solr Tagger to tag not only things (“haystack” in his example) but also command/operator words (“near”). The Solr Tagger provides a necessary piece of natural language search.

Try It Now on Lucidworks Labs

The Solr Tagger example described above has been packaged into a Lucidworks Lab, making it launchable from https://lucidworks.com/labs/apps/tagger/

Erik explores search-based capabilities with Lucene, Solr, and Fusion. He co-founded Lucidworks, and co-authored ‘Lucene in Action.’


Clear Vision Is Vital to Digital Transformation Success


Digital transformation projects must be based on a clear vision that is propagated by the CEO to be successful, our research finds.

In a review of articles and studies on the keys to digital transformation success, there’s near uniformity around the idea that the C-suite in general, and the CEO in particular, must lead the vision.

Tom Puthiyamadam of PwC said recently the CEO is increasingly at the forefront of digital transformation. “I think the concept of the Chief Digital Officer at some point needs to fade because I don’t think a CEO can at any point outsource the most critical movement in their company,” he said. “Leveraging AI, automation, all this new technology innovation—I don’t know how you outsource that to somebody else.”

An MIT Sloan Review survey of more than 4,300 managers, executives, and analysts worldwide found that more digitally mature organizations are likely to have the CEO, rather than the CIO, lead digital transformation efforts. This may be because digital transformation is as much a cultural and operational change as it is a technological one.

As this ZDNet article points out, CEOs have the institutional gravitas to push the entire organization in the direction digital transformation requires.

A Digital Transformation Plan Is Key to Success

A clearly articulated plan for transformation is vital to success. A McKinsey report found that companies where the management team put in place a change narrative for the digital transformation were more than three times more likely to succeed than those that did not.

Additionally, companies were twice as likely to have positive results if senior managers “fostered a sense of urgency” to make the transformation occur.

Finally, companies with clear metrics to assess implementation of the vision had twice the success rate of companies that lacked these measurements.

A comprehensive Harvard Business Review article on digital transformation asserts that strategy must come before anything else, saying, “Figure out your business strategy before you invest in anything.”

While this might seem obvious, many companies skip right to purchasing technology without first understanding what they will use it for.

CIO.com agrees, saying that establishing a “Business Why” for the digital transformation is key to long-term success.

This ‘why’ is aligned to business objectives such as making the organization more agile or adaptable, keeping up with competitors, and attaining greater profits.

Once the vision is in place, the CEO and the rest of the company’s leadership must show clear support for its implementation. As Justin Grossman, CEO of meltmedia, writes in this Forbes.com piece, thoughtful planning is the basis of long-term success. He explains, “Savvy teams assess organizational goals, analyze integration needs and evaluate impact before designing (or changing) their digital roadmaps.”

Digital Transformation Is a Long-Term Process

Many technology experts point out the need to view digital transformation as a process or a journey that will change the business permanently, rather than an instantaneous change.

Deloitte’s “Strategy, Not Technology, Drives Digital Transformation” sums up this idea nicely. The report’s authors found that 80 percent of digitally mature organizations had a clear strategy compared to 15 percent of companies that were still in the nascent stages of digital maturation. “The power of a digital transformation strategy lies in its scope and objectives. Less digitally mature organizations tend to focus on individual technologies and have strategies that are decidedly operational in focus. Digital strategies in the most mature organizations are developed with an eye on transforming the business.”

Thus, before being wowed by vendor stories of the capabilities of the newest technologies, companies must turn inward and establish a clear understanding of why they want to digitally transform in the first place. Without such a vision, companies will be relying more on hope and luck to drive change rather than asserting control over the process.

Dan Woods is a Technology Analyst, Writer, IT Consultant, and Content Marketer based in NYC.

The post Clear Vision Is Vital to Digital Transformation Success appeared first on Lucidworks.

Search is Sexy! (Again?)

You’re not taking crazy pills: Reports of the death of enterprise search are exaggerated. Search is once again in the headlines. Search is back, baby!

But you can be forgiven if you didn’t know it because it’s draped in buzzwords.

Search this, search that, search here, search there, search me, search you…. so much search. It felt like for most of the aughts, all anyone could blabber on about was enterprise search. “Connecting employees to data so they can do their jobs better…” blah blah blah OMG we get it.

Then Google came along and raised everyone’s expectations on what search could be and should be. Gradually those expectations moved into the workplace and enterprise search took another step in sophistication and speed.

Nonetheless, it was such an improvement in what we had that we all settled happily into using Google at home and accepting Google-like results at work.

The Need for Search Was Everywhere

It’s not surprising. The enterprise search industry is pretty long in the tooth, and we were all kinda over all of it by 2010. But search was once fresh, exciting, and new.

Computer science pioneer Karen Spärck Jones laid the groundwork for the modern search engine in 1972 with her work in natural language processing. She introduced the inverse document frequency weighting at the heart of term frequency–inverse document frequency (tf-idf), which weights a term by how often it appears in a document, offset by how common it is across the corpus. Tf-idf remains the underpinning of the search engine today, and it was revolutionary.
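To make the idea concrete, here is a minimal sketch of tf-idf scoring in plain Python; the toy documents and the `tf_idf` helper are illustrative, not taken from any particular search engine:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Score each term in each doc: high when frequent in the doc but rare in the corpus."""
    n_docs = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in docs for term in set(doc.split()))
    scores = []
    for doc in docs:
        counts = Counter(doc.split())
        total = sum(counts.values())
        scores.append({
            term: (count / total) * math.log(n_docs / df[term])
            for term, count in counts.items()
        })
    return scores

docs = [
    "solr powers enterprise search",
    "search engines rank documents",
    "solr indexes documents fast",
]
scores = tf_idf(docs)

# "search" appears in two of the three documents, so it is weighted
# lower than "enterprise", which is unique to the first document.
assert scores[0]["enterprise"] > scores[0]["search"]
```

Real engines refine this basic formula (Lucene, for example, layers length normalization and other tweaks on top), but the intuition is the same: rare, document-specific terms carry the most signal.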

Spärck Jones’ work, combined with database advancements in the 1970s, saw the birth of enterprise search. Query languages followed in the 1980s, the Internet exploded in the 1990s, and big data happened in the aughts. The need for effective search kept growing.

Tf-idf has its limits, though. It relies heavily on word counts to compute document similarity, which can be slow for large vocabularies. These methods also ignore semantics – the meaning of the words in a document and how they relate to each other. And these days your corpus of data might not be all text. You could have images, audio, video, and other multimedia formats you’re trying to index and understand.
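A toy example of that semantic blind spot: two documents about the same subject that share no vocabulary get a count-based similarity of exactly zero. The vectors and terms below are made up for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(weight * v.get(term, 0.0) for term, weight in u.items())
    norm = lambda vec: math.sqrt(sum(x * x for x in vec.values()))
    return dot / (norm(u) * norm(v))

# Two documents that mean roughly the same thing but share no terms,
# and a third that reuses the first document's vocabulary.
doc_a = {"car": 2, "engine": 1}
doc_b = {"automobile": 2, "motor": 1}
doc_c = {"car": 3, "engine": 1}

assert cosine(doc_a, doc_b) == 0.0  # synonyms look totally unrelated
assert cosine(doc_a, doc_c) > 0.98  # shared vocabulary scores high
```

Closing that gap – recognizing that “car” and “automobile” are about the same thing – is exactly what semantic techniques like word embeddings were later invented to do.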

Regardless, vendors and platforms sprang up and attempted (or failed) to meet the challenges companies of all sizes were having getting their employees connected to the data and documents that they needed to do their jobs.

In the last 15 years, we saw client-server model technologies from companies such as Verity, FAST and Endeca advance what’s possible for enterprise search. These search deployments were great for their time, indexing and accessing simple file servers, a web server or three, and data (assuming it was well-structured).

As the technologies matured, things were starting to fall apart at the strategic level. Vendors were charging per document, so licensing costs grew and grew until many organizations threw up their hands and went to open source, driving costs down. Hello Apache Lucene, Solr, and Elasticsearch.

We saw consolidation as Autonomy bought Verity, and HP bought Autonomy. Oracle bought Endeca, Google EOL’ed Search Appliance, and Microsoft bought FAST.

And whether these solutions were knowledge management or intranet search or knowledgebase search or customer support search or expert finders – we still called it all search. You had a box and a query and the search engine came back with a set of results. It was search. And usually it worked pretty well. Sometimes it didn’t. But we were happy with it.

But sometimes analysts get bored and marketers get tired. And that’s when the obfuscation began.

Insight Engines?

Who wanted old, ordinary, plain-Jane search when we could have oh-so-many other things?

Analyst firm Gartner changed the name of the vertical of their famed Magic Quadrant from enterprise search to insight engines. I know I read that and thought, “What fresh hell is this?” and thought, “Analysts gonna analyst.”

Turns out, there was a point to its re-framing. Let’s look at how Gartner described insight engines in the past few years. In their 2017 report:

“Insight engines provide more-natural access to information for knowledge workers and other constituents in ways that enterprise search has not.”

The emphasis is on the access to the information, making it more intuitive and easier. Then in 2018, Gartner revised its definition with:

“Insight engines augment search technology with artificial intelligence to deliver insights — in context and using various modalities — derived from the full range of enterprise content and data.”

This is where Gartner tucks in the use of AI and tilts the focus to the intention of the user – that they want insights from the questions that they ask their data. They want insights that help them do their jobs and take the next best action.

Cognitive Search?

Meanwhile, down in the Forrester PDF mines, analysts were trying to figure out their take – and what they were going to call it. They settled on cognitive search and knowledge discovery solutions. Rolls right off the tongue. Here’s the definition in the last Forrester Wave for this market segment:

“A new generation of enterprise search solutions that employ AI technologies such as natural language processing and machine learning to ingest, understand, organize, and query digital content from multiple data sources.”

Forrester brought AI into the definition just as Gartner did, but it specifically calls out natural language processing and where it will be part of the user’s experience.

Oh God, Here Come the Marketers

And aside from the analysts, at each vendor’s HQ the same ruminating was going on, “Well, what in the world are we supposed to call what we do now?”

This whole enterprise search thing. We don’t want it to be limiting but it needs to evoke the continuing evolution of search across the organization. It can’t be just “Now with AI!”

Or can it?

Here at Lucidworks we’re calling it AI-Powered Search. It’s a perfect Reese’s cup of the word search – with all the historical meaning that we need so that people have some sense of what we do – along with AI-powered to say it’s the new-new thing. It’s like the old thing, but new and better and more powerful and helpful. Uniquely personal. The search and data applications you use at work should be just as responsive and intuitive as Google or Amazon at home.

But is AI even descriptive enough?

We’ve seen the phrase artificial intelligence start to lose its luster, and we see more industry messaging and marketing around machine learning, with nods to deep learning on the horizon. As customers get more educated, they are getting better attuned to what they need to shop for in their next search platform. AI and machine learning aren’t just nice-to-haves; they’re critical components of building search systems that index millions of documents and serve thousands of users with dozens of applications.

Search By Any Other Name Is Still Sweet

Stop someone on the street and say, “Excuse me, what’s the word you use for when you’re looking for something among a bunch of other things?” No one is going to say information foraging, knowledge management, cognitive search, search and discovery, or talk about using an insight engine. They’re going to call it search. Searchity Search Search. Searchy McSearchface. You get the idea.

Search remains the universal interface. Doesn’t require much training. Doesn’t require having to know Boolean operators or SQL structure. If you want to know something or do something you just ask it, and the system takes your query and uses natural language processing to figure out the specifics of what you’re wanting to know – and then gets you the right answer so that you can decide what to do next.

This is being embedded in the internal apps you use at work and the external apps and devices you use once you get home. Most of the apps on the phone in your pocket right now are search-based. Songs. Podcasts. Google Docs. Instagram thots.

If you’re trying to find something in a huge pile of other things, it’s probably a search app. Even if you don’t call it that.

So is search back? Well, it never left but it looks like we’re calling it search again. For now. Maybe in a few months or years we’ll have a new name for it. But it’s still search, same as it ever was.

Andy Wibbels is a published author and has been featured in The Wall Street Journal, USA Today, Entrepreneur, Wired, Business Week, and Forbes. He’s worked at several startups including Typepad, Get Satisfaction, InMobi, Keas, and Mindjet and is currently Director of Marketing at Lucidworks. andywibbels.com

The post Search is Sexy! (Again?) appeared first on Lucidworks.

How to Make Manufacturers More Agile

Digital transformation has come to industry at last — with the aim of making every sort of manufacturer more agile and able to respond to marketplace changes.

Evidence of this Fourth Industrial Revolution (4IR), or Industry 4.0, is everywhere, from Tesla using artificial intelligence to drive cars to less visible examples like using a 3-D printer to make parts for those cars.

Industry 4.0 may not sound intuitive. Britannica.com describes the Industrial Revolution as “the process of change from an agrarian and handicraft economy to one dominated by industry and machine manufacturing.” Is Industry 4.0, then, about bringing modern processes to farming and Etsy? Well, kind of.

Industry 4.0 is changing processes, practices, people, culture, equipment, and every other aspect of providing goods and services with the aim of responding to market forces quickly and more easily.

Industry, as we describe it today, is the grouping of organizations based on their primary business activities. There is additional breakdown within that classification of primary, secondary and tertiary industries based on whether a business engages in providing raw materials, using those materials to create other goods, or providing those goods and related services to consumers.

Every business along this chain should be participating in Industry 4.0. But that participation may vary greatly, depending on the economic sector, the specific goods or services an organization provides, and where the company is in the digital transformation process.

We asked six experts how they define Industry 4.0 and how their companies are investing in the processes.

They said that by using smarter technologies, improving machinery, and using people for problem solving, organizations should see gains in efficiency, have more product reach, find ways to expand, and be better able to serve customers — especially with personalized services.   

Industry 4.0 Means Smarter Automation

Industry 4.0 refers to the latest trends in industrial production characterized by the use of interconnected cyber-physical systems (CPS) and advancement in communication systems. Machines are thus able to take physical inputs, calculate a logical decision based on an algorithm, and perform a physical output — entirely without human intervention.

Industry 4.0 currently shows huge potential in the automation and artificial intelligence (AI) space that transcends the usual production applications, including proactive maintenance and overall process improvements. CPS can collect historical data and process it to inform future actions.

Key performance indicators (KPIs) for a successful transition to smarter manufacturing are machine uptime, machine downtime, and mean time between failures. All three KPIs relate to the effectiveness of the equipment, which in turn indicates the health of the manufacturing system as a whole.

Ryan Chan is founder and CEO of UpKeep. In his 20s he saw how maintenance workers struggled to manage their tasks offline. So he learned to code and fixed that problem. 

Industry 4.0 Provides Sales Teams With Better Tools

Industry 4.0 is a new era of growth in the tech industry that is marked by intelligence and connection. Amazing new technologies, such as augmented reality and visual configure-price-quote (CPQ) software, are hitting the market all at once. These advanced technologies let companies streamline sales and engineering processes to sell and build more, faster. By simplifying processes, sellers can satisfy their customers in a new way by providing a fast and awesome buying experience.

The most noteworthy advancement, which we are seeing during this transformation, is the use of visualization in the sales process. The ability to place a CPQ solution on an external website allows companies to provide a visual of any product on their website or in a sales meeting. With this 4IR technology, potential customers can see exactly what they are purchasing and how much it will cost them in real time. A process that once took businesses several weeks to complete can now be done in a matter of minutes.

Kris Goldhair, Strategic Account Director and Co-Founder, KBMax. Goldhair is an enterprise SaaS sales and consulting professional with industry expertise in CPQ, 3D product configurators, and B2B2C technology.

Human Workers Are Key to Industry 4.0

The standard definition of Industry 4.0 tends to focus on the rise of automation, robotics and smart technology but omits the role for human workers, who play a pivotal role in the success of advanced technology.

We are living in a dynamic era where unlocking and multiplying human ingenuity is central to manufacturing.

There are opportunities to gain efficiency and improve problem solving by having better data. By investing in machine integration and Internet of Things (IoT) sensors for machines, you can gain additional data, but the data still needs to be aggregated and visualized in a way that lets humans quickly identify and prioritize which issues to address first.

On the manufacturing plant floor, one tech problem can shut down the whole line. By allowing humans to be a part of the equation in real time, tech operates at its fullest capacity, and humans get to keep their jobs.

The human brain is still the most powerful technology on the plant floor. In order to compete, we need to enhance and foster natural intelligence more than we need artificial intelligence.

Keith Barr, CEO and President of Leading2Lean, a lean software that improves manufacturing operations through digital transformation.

Industry 4.0 Includes Personalizing Consumer Products

In an ever-changing technology landscape, manufacturers and machine shops have the opportunity to add new tools to increase efficiency but face the challenge of choosing tech that can scale modernization reliably and cost-effectively.

One successful example is 3D printing, which can work well alongside older workflows. Given its low cost, it can also replace old machinery.

Automated 3D printing solutions are already helping scale personalized products for mass-market opportunities, from patient-specific models in healthcare to consumer products such as dentures, shoes, and earbuds. New Balance uses 3D printers for custom footwear manufacturing. Dentists create custom-fitted dentures with 3D printers, and healthcare facilities can 3D print models of body parts for surgeons to practice on before surgery. That’s just the beginning.

Andrew Edman, Formlabs’ Industry Manager of Product Design, Engineering, and Manufacturing. He is a seasoned designer with deep experience in product development from earliest concepts, product specs, and research, through to DFM/A and scale manufacturing.

Smart Manufacturing Includes IoT

Smart factories combine cloud infrastructure with IoT to automatically connect every person and machine in a bi-directional feedback loop, linking the factory floor with customers via a single digital thread to ensure quality, delivery, speed, and accuracy.

For example, software-powered automation can improve production processes for printed circuit board assembly (PCBA) prototypes and save design time, build cost, and lead time.

This process helps employees optimize their time by telling them where to be, when, and which step in the process needs their attention. Moreover, using IoT-networked devices to connect every person with every machine streamlines operations, improves collaboration, and simplifies remote management and control.

Shashank Samala, co-founder and VP of Product at Tempo Automation, electronics manufacturer for prototyping and low-volume production of printed circuit board assemblies.

Key to Industry 4.0 Is Automation and Connectivity

Focusing on automating the part-sourcing process makes manufacturers more cost competitive and reduces their time to market. AI-powered tools can instantly predict the cost and lead times of manufacturing for 3D printing, computer numerical control (CNC) machining, and injection molding. Creating a global network of connected manufacturing services – the smartest manufacturing network on the planet – will optimize the global supply chain.

Filemon Schoffer, Co-Founder & COO, 3D Hubs. With a background in both physics and industrial design engineering, Filemon aims to help move the 3D printing industry forward.

Evelyn L. Kent is a content analytics consultant who specializes in building semantic models. She has 20 years of experience creating, producing and analyzing content for organizations such as USA Today, Tribune and McClatchy.

The post How to Make Manufacturers More Agile appeared first on Lucidworks.


Chatbots For Shopping, Banking & More

When chatting with a brand, customers are looking for instantaneous answers. In fact, according to Chatbots magazine (yes, there actually is one), real-time messaging has impacted not only personal lives but also business relations. And as Harvard and Inside Sales discovered, you have five minutes to respond to a customer before you are at risk of losing the lead.

As a result, chatbots are quickly rising in popularity. This year’s Activate Conference will include training sessions and panels on natural language processing, recommendation engines, and the future of voice search. What are all these components a part of? None other than chatbots.

And just as people come with different personalities, senses of humor, and areas of expertise, bots can be individualized to best serve your unique audience.

Simple rules-based chatbots are nothing but pre-recorded scripts and decision trees. They ask yes/no questions and then route you down a branch of the tree. These aren’t really bots, because they aren’t learning.
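A rules-based bot of this kind boils down to a hard-coded tree walk. Here is a minimal sketch; the questions, answers, and the `run` helper are all hypothetical:

```python
# Every path through the conversation is scripted in advance;
# nothing is learned from the user's input.
TREE = {
    "question": "Is your issue about an order?",
    "yes": {
        "question": "Has the order already shipped?",
        "yes": {"answer": "Track it at the carrier's site with your tracking number."},
        "no": {"answer": "You can still edit or cancel it from your account page."},
    },
    "no": {"answer": "Let me route you to general support."},
}

def run(tree, replies):
    """Walk the tree using a scripted list of 'yes'/'no' replies."""
    node = tree
    for reply in replies:
        if "answer" in node:  # reached a leaf early
            break
        node = node[reply]
    return node["answer"]

assert run(TREE, ["no"]) == "Let me route you to general support."
```

Everything the bot can ever say is already sitting in that dictionary, which is exactly why such systems feel rigid the moment a question falls outside the script.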

AI-Based Chatbots

On the other hand, AI-based chatbots provide responses based on a combination of predefined scripts and machine learning applications. Advancements in NLP and insight automation enable a customer’s unique data, not just generic scripts, to be included in the knowledge base that a bot pulls from.

People want their shopping experiences to be personal, to understand their likes, their style, and to have an intuitive shopping experience. The best bots will be know-it-alls, programmed with the brand’s personality, that take users to their “next best action” as quickly as possible.

One of my favorite bots is Levi’s Indigo. Mode.ai partnered with Levi’s back in September 2017 to build a virtual stylist named Indigo, with the intent to help women find their perfect pair of jeans. Because of its success, they’ve since expanded Indigo’s capabilities to include ‘find an outfit’, product search, getting help, and my social favorite, ‘see what’s up at Levi’s’, which shares stories about the company’s sustainability efforts.

The fashion industry isn’t the only one relying on chatbots for customer support and online shopping. For example, Bank of America recently developed a bot named Erica that helps its 67 million customers find important documents without human assistance.

Where to focus? Voice or Text?

We know that millennials are choosing texting over calling more than previous generations did, but with over 100 million Alexa devices sold as of January, talking isn’t going out of style anytime soon.

However, there are pros and cons to consider for both. A text chatbot interacts strictly through messaging, whereas a voice bot needs to be an expert at understanding the questioner’s voice in order to translate it into text. A text bot can be merged with SMS, social apps, and email, which are widely used and make it the most convenient form of a bot. But a voice bot is very attractive to users who prefer hands-free communication, like when you’re driving a car.

Regardless of the bot you’re building, you’ll be able to learn more about its unique components at Activate 2019, happening in D.C. from September 9-12. Be sure to register today at https://activate-conf.com/.

Liz O’Neill is a Senior Solutions Manager, Digital Commerce at Lucidworks. She has a focus and passion in retail and technology, particularly in enabling collaborative and effective strategic partnerships.

The post Chatbots For Shopping, Banking & More appeared first on Lucidworks.

Lucidworks Named a Leader in The Forrester Wave™: Cognitive Search, Q2 2019

According to the technology research and advisory firm Forrester, “Employees and customers have an insatiable need for information. Cognitive search delivers it.”

And Lucidworks is leading the market.

After scoring a dozen vendors across more than 20 criteria, Forrester named Lucidworks a Leader in the cognitive search market, giving top marks to our strong product, strategy, solution roadmap, customer success, and partnerships.

This year we made a two-tier leap from Contender to Leader, and Forrester says we are one of the “providers who matter most” based on our strong scores for current offering and strategy.

Whether you are working to improve insight discovery to enable your digital workplace, or want customers to enjoy a better digital commerce experience with your brand, cognitive search solutions offer a path to greater employee and customer engagement.

“Search is the universal way to access the information that powers our every day,” said Lucidworks CEO, Will Hayes. “We’ve built our solution to deliver a more delightful online shopping experience to customers and to give people access to the data and insights that empower employees to make smarter business decisions.

“We’re providing our customers with AI-powered search to solve their biggest data problems for the world’s largest companies so they can receive the most value possible from their information.”


Register today to join Mike Gualtieri, Forrester Analyst, on June 26 for a webinar to learn more about the cognitive search solutions on the market today, and for an in-depth look at this year’s Forrester Wave™.


We believe Forrester’s recognition is proof that our product addresses the needs of today’s enterprise search customers, backed by a strong strategy capable of addressing the future needs of this maturing market.

Thirty-four of the Fortune 100 would agree with that assessment, including customers such as AT&T, Honeywell, Morgan Stanley, Red Hat, Reddit, Staples, Uber, and the U.S. Census Bureau.

We’re proud of our growth over the past year and look forward to continued innovation, delivering leading cognitive search capabilities as our customers’ needs evolve.

A complimentary copy of Forrester’s 2019 Wave™ for Cognitive Search research report is available here: https://lucidworks.com/ebook/forrester-wave-2019/.

The post Lucidworks Named a Leader in The Forrester Wave™: Cognitive Search, Q2 2019 appeared first on Lucidworks.

Search Was Everywhere at Gartner’s Digital Workplace

Last weekend in Orlando, Gartner held the first US edition of its Digital Workplace summit. Over two days, around 650 attendees met, mingled, and learned how digital transformation initiatives are changing the nature of work. Topics ranged from increasing engagement with employee culture to the future of work and distributed teams.

As a sponsor of the summit, our VP of Product Marketing, Justin Sears, and Senior Solution Engineer, Andy Tran, spoke on how AI-powered search is a critical technology powering the modern digital workplace.

Search? Really?

Nick Drakos, VP Analyst at Gartner, spoke about collaborative work management and how tools like Slack and Microsoft Teams are pushing communication beyond just information sharing and productivity improvements to joint innovation. His talk was illuminating, especially when considered alongside that of his colleague, Senior Director Analyst Marko Sillanpaa, who specializes in content services (formerly ECM) and corporate legal content technologies.

Sillanpaa spoke on fighting information silos by using intelligent software to federate and aggregate data in one place, extract insights from it, and deliver them to people when they can make use of them. In other words, a search engine.

Both Drakos’ and Sillanpaa’s talks were fundamentally about empowering knowledge workers with information and insights by understanding intent — so employees can find the specific data and documents they need to do their daily work. Left unsaid in both sessions was that the key enabling technology for understanding a user’s intention is search.

We talk frequently at Lucidworks about search being the universal, or perfect, UI. It’s perfect because it’s dead simple to use; anyone can use it. It requires no instruction manual, no arcane Boolean commands, no witchcraft. But most importantly, search hides an enormous amount of complexity from the user, so they can focus more on the task at hand and less on mastering the tools to complete it.

Whether we’re talking about crawlers that federate data from multiple content and data silos or machine learning that determines how to categorize and classify content so workforce productivity tools like Slack can quickly and clearly connect people to each other, much of the technology that underpins the digital workplace is good ole search.

Though enterprise search has still never fully lived up to its promise of delivering Google-like precision to the modern workforce, we have reached a new threshold: understanding user intent to deliver enormous value to digital workers.

Our flagship platform Fusion dramatically reimagines enterprise search as the foundation for driving the insights-driven workplace. As Justin and Andy explained in their talk, Lucidworks Fusion hyper-personalizes the employee experience and finally does what enterprise search was never able to do.  

This is finally possible because of three key shifts:

  1. We can capture user interactions and understand user intent at scale and in real-time for the first time ever.
  2. Cheap storage and powerful GPU chips let us apply ML at very large scale to crunch trillions of digital interactions to augment conventional plain text-based relevance.
  3. AI has escaped the lab and is widely available powering production-ready workloads across the organization and up and down the org chart.

As Justin referenced in his talk, Forrester has found that companies with happier employees enjoy 81 percent more customer satisfaction and have half the employee turnover. Companies make happier employees when they give them the tools that let them do their jobs with less pain, and more enjoyment.

The best way to achieve that is by giving users insights when they need them, sometimes before they even know what they are looking for. Only an AI-powered search platform like Lucidworks Fusion can do that.

A music and news junkie and full time book-hound, Vivek Sriram has lucked out turning his love of looking for stuff into a 15+ year career in dreaming, building and marketing search engines. As CMO at Lucidworks his job is to turn the rest of the world on to mysteries and joys of search engines.

The post Search Was Everywhere at Gartner’s Digital Workplace appeared first on Lucidworks.

Use Real-Time Data to Track Digital Transformation Progress

One often overlooked factor in successful digital transformation journeys is the integration and use of real-time data and analytics to establish metrics and track progress.

This is especially important because numerous studies, including this one from McKinsey, have found that 70 percent of companies’ digital transformation efforts fail.

As pointed out in Telstra’s “Disruptive Decision-Making” report, digital transformation must involve the rethinking of how a company puts technology to use. New access to huge amounts of data will be at the center of those decisions. So, too, will new technology that is implemented to upgrade user experiences.

But digital transformation is not just about changing the external facing technology of a business. It’s also about using technology to change a company internally. And based on a review of reports and articles about digital transformation, companies are often failing to use data to measure how well their digital transformation is proceeding.

The Need For Metrics

As pointed out in this Forbes article, for companies to succeed in digital transformation, they have to change the metrics that executives use to measure company performance. The article’s author, Peter Bendor-Samuel, CEO of Everest Group, recommends using a venture capital process and mind-set to improve progress, where “leaders make capital available, and sprints or projects that are completed, draw down on that capital.”

That allows companies to see what they’re spending on digital transformation and how well those projects are faring. Companies can even establish journey teams made up of senior executives to monitor progress through the use of metrics agreed upon by all those in the company.

These metrics should be specific and could include:

  • The data sources participating in an integrated search.
  • The number of entities that were consolidated during the transformation.
  • The rate of operationalization of transformation projects.
  • The amount of money being spent on projects.
  • The additional time staff members have to focus on larger goals after the automation of tedious tasks.

The metrics must be company specific but are a strong foundation to make digital transformation possible.

How Metrics Ensure and Defend Digital Transformation Success

This McKinsey study on digital transformation success found that without expansive metrics, companies might achieve temporary improvements that are not sustained over the long term. To make success permanent, the study’s authors write, “The first key is adopting digital tools to make information more accessible across the organization, which more than doubles the likelihood of a successful transformation … [and] an increase in data-based decision making and in the visible use of interactive tools can also more than double the likelihood of a transformation success.”

The survey found that organizations that established clear targets for key performance indicators that were informed by accurate data were two times more likely to have transformation success than companies that did not. Additionally, companies that established clear goals for their use of new technology improved their chances of success by a factor of 1.7. Such goals and metrics must have real-time data so companies know whether or not they are on track.

Real-Time Analytics Allow for Fast Adjustments

In order to make speedy adjustments, metrics must be based as much as possible on real-time data. It might seem obvious that in a world where data increasingly drives decision-making in almost every realm, using it for internal progress checks would make sense. But far too often, companies are not using data in this regard.

In an article on the “three p’s of digital transformation”, Billy Bosworth, the CEO of DataStax, says that, “While 89 percent of enterprises are investing in tools and technology to improve their customer experience initiatives, too few are relying on real-time data to inform decisions.”

He goes on to write that “… the most important performance metric is the impact on customer experience — which translates into increased retention and revenue. Brand value and Net Promoter Score can also be non-revenue and non-cost vital metrics to track performance.”

Real-time data is vital to make sure these types of metrics can be accurate and informative to the business.

A recent Harvard Business Review article brings this point home. The article’s four authors all have extensive experience in various industries. One of them, Ed Lam, the CFO of Li & Fung, shared his firsthand experience of what led to digital transformation success at his company. The authors note that Li & Fung created a three-year transformation strategy geared toward improving the use of mobile apps and data in its global supply chain. The company then used real-time data to measure progress.

The authors wrote, “After concrete goals were established, the company decided on which digital tools it would adopt. Just to take speed-to-market as an example, Li & Fung has embraced virtual design technology and it has helped them to reduce the time from design to sample by 50 percent.

“Li & Fung also helped suppliers to install real-time data tracking management systems to increase production efficiency and built Total Sourcing, a digital platform that integrates information from customers and vendors. The finance department took a similar approach and ultimately reduced month-end closing time by more than 30 percent and increased working capital efficiency by $200 million.”

The benefits of such a strategy and integration of real-time digital transformation metrics can thus be profound. Digital transformation is now an imperative for companies worldwide.

For instance, IDC estimates that spending on digital transformation will grow from $1.07 trillion in 2018 to $1.97 trillion in 2022. The World Economic Forum recently echoed this conclusion, estimating that the overall economic value of digital transformation to business and society will exceed $100 trillion by 2025.

It’s not a matter of whether companies engage in digital transformation, but rather how. Using real-time data to inform metrics about progress is a crucial step in this process.

Dan Woods is a Technology Analyst, Writer, IT Consultant, and Content Marketer based in NYC.

The post Use Real-Time Data to Track Digital Transformation Progress appeared first on Lucidworks.

Average Order Value Down? Endeca Could Be the Culprit


If you are seeing your Average Order Value (AOV) flattening or declining, and you’re still using Endeca, your search engine could be to blame.

AOV is total revenue divided by number of orders. If your AOV is dropping, that means your loyal customers are spending less with each purchase. That’s a real problem, says Richard Isaac, CEO of RealDecoy, a leading enabler of ecommerce for brands like American Express, Samsung, Honeywell, and Coach, and a long-time integrator of Endeca.
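The calculation itself is simple. As a quick illustration (the figures below are made up):

```python
# Average Order Value (AOV) = total revenue / number of orders.
def average_order_value(order_totals):
    """Compute AOV from a list of per-order revenue figures."""
    if not order_totals:
        return 0.0
    return sum(order_totals) / len(order_totals)

# Example: five orders totaling $500 yield an AOV of $100.
orders = [120.00, 80.00, 95.50, 110.50, 94.00]
print(round(average_order_value(orders), 2))  # 100.0
```

Tracking this one number over time, segmented by customer cohort, is what surfaces the flattening or decline Isaac describes.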

“In my experience, most retailers overinvest in customer acquisition while under-investing in growing average order value and conversion rates,” says Isaac. He estimated that the cost to acquire a new customer is 9x the cost of retaining an existing one.

“Top retailers agree that digital commerce search has a huge impact on their cost of service, average order value, and customer loyalty. But many of them struggle with obstacles like budgets and technology that does not (and may never) meet their needs. Most importantly, they lack a data-driven approach to making enhancements,” he explains.

Today’s ecommerce shoppers expect search to be personal and precise: to provide them with exactly the info they need, when they need it.

Customers are not really interested in your top 10 guesses about what they might want. They just want the top one or two items that fit exactly what they need. And when that doesn’t happen, customers get frustrated.

Today’s retailers try to meet this demand for a personal approach to ecommerce with more words. They provide catalogs of many thousands of products with complex, specific descriptions, hoping that customers will have the patience to wade through them and find what they need. But what if they do not show such patience?

Endeca’s Keyword Search Is No Longer Enough

Endeca’s simple keyword search is no longer enough to keep up with the volume, velocity, and variety of products and words that retailers must manage if they don’t want to leave money on the table.

Most teams using Endeca know they must act smarter and faster, yet Oracle isn’t innovating the Endeca product and support is inadequate. The question isn’t “whether” to transition, but when and how. When do the benefits of switching outweigh the costs and potential for disruption? That time has come.

Average Order Value and Conversion Success

Leading retailers like Lenovo have recently upgraded their search capabilities and as Marc Desormeau, Lenovo’s Senior Manager of Digital Customer Experience put it, “Since the migration [to Fusion from Endeca] we’ve seen a 50-percent increase in conversion rates and other key success metrics for transactional revenue.”

So what do folks like Desormeau and Isaac look for in a modern solution? First, they want newer and better search algorithms that allow customers to find what they need on the first try. Second, they seek a modern, scalable architecture that allows them to do more with their data and handle changes in real time. If they need a full re-index of their catalog and site content, they want it to take minutes, not hours. Just like you, they want their customers to find what they need, when they need it, however they describe it.

Retail Trends in 2019

With retailers scrambling to find ways to improve the customer experience, don’t forget to look at site search analysis. The right analysis will help you figure out user intent.

So the right solution should learn from customer search behavior. As customers click on different results, those results should be boosted. If an individual customer trends towards certain types of products, they should automatically see more of what they’re interested in, without a merchandiser having to create a specific rule or customer segment.
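One common way search platforms incorporate click signals (the exact mechanics vary by product; the numbers and names below are purely illustrative) is to add a boost proportional to the log of accumulated clicks, so frequently clicked results rise without a merchandiser writing a rule:

```python
import math

def boosted_score(base_relevance, clicks, weight=0.5):
    # Logarithmic dampening keeps runaway-popular items from
    # completely drowning out textual relevance.
    return base_relevance + weight * math.log1p(clicks)

# Two products with similar text relevance; the one customers
# click more floats to the top over time.
results = [
    {"sku": "TENT-2P", "base": 1.20, "clicks": 400},
    {"sku": "TENT-4P", "base": 1.25, "clicks": 10},
]
ranked = sorted(results,
                key=lambda r: boosted_score(r["base"], r["clicks"]),
                reverse=True)
print([r["sku"] for r in ranked])  # ['TENT-2P', 'TENT-4P']
```

Per-user personalization works the same way in spirit, except the click counts are weighted toward an individual shopper’s own history or segment rather than the whole population.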

Turning on and Configuring Signals

Finally, Endeca users have relied on thousands of difficult-to-maintain merchandising rules to personalize, customize, and optimize the customer experience. Modern solutions use AI and machine learning techniques to do the heavy lifting, reserving predictive merchandising tools (including rules) for more specific tuning. This means tens or hundreds of rules instead of thousands. And it means a lot less drudgery for merchandising teams.

Predictive Merchandising in Fusion 4.2

Retailers who aren’t moving from Endeca quickly enough will not be able to take advantage of signals, relevancy tuning, and machine learning. They will be leaving money on the table. In fact, those that do move typically capture enough additional revenue to pay for the migration in as little as a few months.

Learn more:

The post Average Order Value Down? Endeca Could Be the Culprit appeared first on Lucidworks.
