
What Does Search Have to Do With Intelligent Machining?


Since the Industrial Revolution, manufacturing has seen a continuous march towards increasing levels of automation and the integration of machine intelligence. This progress in artificial intelligence and digital manufacturing has led mainstream publications such as The Atlantic to ask: “If machines are capable of doing almost any work humans can do, what will humans do?”

And the answer, of course, is: “everything else.” This includes, among other things, design and ideation, as well as troubleshooting defects and faults. In order to do “everything else,” humans need information about what, how, and where, as well as information about the machines they’re working with and the things they’re creating. Finding any and all of that, in time and with the right context, is the very nature of search.

The Digital Thread

Manufacturing is by definition a creative endeavour. When you add the design and consumer-demand elements, it is on par with art or music. Instead of a symphony, though, manufacturers make things that people use every day.

Today’s smart factories operate along a digital thread with a process that, roughly speaking, looks like this:

  • A designer creates a “digital twin” in CAD software.
  • The design is then simulated and stress tested. For instance, Under Armour simulates how fabrics will hang on a human model.
  • A “build plan” is created and simulated. This basically details what has to be done to realize the design.
  • The design is then fabricated using a series of tools, robots, and processes.
  • Finally the creation is verified, possibly modified, and rescanned.

(Find out more on Deloitte’s excellent summary “3D opportunity and the digital thread.”)

So what does this have to do with search? Let’s look more closely at the process.

Visualization of the digital thread. Credit: Deloitte

Intelligent Machining

Traditionally, tooled manufacturing involved an operator controlling a machine like a cutting device or an additive process and then making decisions based on analog or digital sensor readings. Automated manufacturing, or intelligent machining, requires a finer degree of data and computer algorithms. These are essentially mathematical expressions of operator activities.

The sensors in all of the intelligent machines help form a series of closed-loop systems and an overall closed-loop manufacturing process. Different machines perform different manufacturing tasks, from cutting to 3D printing components. Sensors and an overall build plan drive each machine and the overall process. This includes not only detecting faults but also control functions, like positioning a component about to be pressed, drilled, or cut. At each step, quality variations can be detected, processes can be adjusted, and better designs can be made.
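The adjust-and-flag step of such a closed loop can be sketched in a few lines. The target, tolerance, and function names below are invented for illustration, not any vendor's controller API:

```python
# Minimal sketch of one closed-loop adjustment step (hypothetical names
# and tolerances; real controllers use vendor-specific interfaces).

TARGET_MM = 25.00      # nominal cut depth from the build plan
TOLERANCE_MM = 0.05    # allowed deviation before we intervene

def control_step(measured_mm, current_offset_mm=0.0):
    """Return (new_offset_mm, fault_flag) for one loop iteration."""
    error = measured_mm - TARGET_MM
    if abs(error) <= TOLERANCE_MM:
        return current_offset_mm, False   # within spec: no change
    # Out of spec: correct the tool offset and flag the event for QC.
    return current_offset_mm - error, True

offset, fault = control_step(25.12)   # sensor reads roughly 0.12 mm too deep
```

The fault flag is what feeds quality control, while the corrected offset keeps the line running without stopping for an operator.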

All of this involves the creation of data. That data has to be found to be used — and finding things requires intelligent search.

Manufacturing as an Enterprise

Manufacturers do not have the luxury of just making things. They must also market things, sell them to customers, and manage relationships with suppliers and, usually, other manufacturers.

Watch our previously recorded webinar “AI for Omnichannel Retailers” to understand how recommendations can be used to close deals.

Next, if there could be one axiom for all of manufacturing history, it might be that nothing is perfect and every imperfection is a potential defect. Each component, whether from the supply chain or created locally, will likely one day undergo modification or revision. Some finished products will come back from customers as returns. To implement any necessary quality control, it is essential that those products be linked with their components, those components with their revision numbers, and those revision numbers with their digital twin and the plan and processes that created them.

Keep your eye out for our upcoming “Product 360 Webinar” to learn more about how manufacturers manage this data.

Search in Manufacturing

Notice that there are still people in the digital thread diagram? Those people consume data, have questions, and need to understand procedures and events. That data has to be found and contextualized. With today’s volumes, this necessitates “smart search” that uses AI to help personalize results.

Numeric data, like sensor readings and return data, is coming together with textual data, like component descriptions and process and procedural documentation, to enable better decisions and better, more predictive quality control measures.
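As a sketch of what that coming-together can look like, here is one hypothetical way to fold numeric sensor readings and free-text process notes into a single searchable document. The field names and component ID are made up; a real deployment would index such documents into a search engine such as Apache Solr:

```python
# Sketch: merge numeric sensor readings with textual process notes into one
# searchable document. Field names and the component ID are hypothetical.
import json
import statistics

def build_doc(component_id, temps_c, process_notes):
    return {
        "id": component_id,
        "temp_mean_c": round(statistics.mean(temps_c), 2),  # numeric facet
        "temp_max_c": max(temps_c),                         # numeric facet
        "notes_txt": process_notes,        # full-text searchable field
    }

doc = build_doc("C-1042", [71.2, 73.5, 70.9], "spindle vibration above normal")
payload = json.dumps([doc])   # the body you would POST to a search index
```

With both kinds of fields in one document, a quality engineer can filter on a temperature range and search the notes text in a single query.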

Search is the glue that keeps the digital thread together by making information accessible to both the people and the machines involved in the manufacturing process. If they can’t find it, they can’t do it.

Learn more:

The post What Does Search Have to Do With Intelligent Machining? appeared first on Lucidworks.


How IoT & Industry 4.0 Relate — and Why Manufacturers Should Care


Two of the hottest buzzwords in real world technology are Industry 4.0 and the Internet of Things (IoT). Sometimes they are used almost interchangeably, but while there is an overlap, they are not synonymous. I’ve written a bit about them in the past, but here’s the skinny on what they are, how they are related and what smart manufacturers are doing to leverage the data they both produce.

Only 20% of consumers understand the term Internet of Things yet nearly 70% own an IoT device
Metova Survey

What Is the Internet of Things?

The Internet of Things involves adding digital sensors and networking technologies to the devices and systems that we use every day in the analog world. Some of the most well-known consumer examples are Nest and Ecobee smart thermostats and Amazon’s Alexa-powered devices including the Echo smart speaker. Smart thermostats have sensors in multiple rooms, and connect to your phone and the Internet to allow you extended control over the temperature. They can also be connected to algorithms to control the temperature when you’re not home or based on weather patterns. They can even “detect” when you leave.
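A toy version of that thermostat logic, with invented thresholds rather than any vendor's actual algorithm, might look like this:

```python
# Toy sketch of a smart-thermostat rule: hypothetical thresholds, not any
# vendor's real algorithm.
def target_temp_c(occupied, outdoor_c, comfort_c=21.0, setback_c=16.0):
    if not occupied:
        return setback_c                   # "detected you left": save energy
    # Nudge the target slightly on very cold days to offset drafts.
    return comfort_c + (0.5 if outdoor_c < -10 else 0.0)
```

The real products combine many more signals (phone location, motion sensors, learned schedules), but the shape of the decision is the same: sensors in, setpoint out.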

By 2020, 30% of our interactions with technology will be through “conversations” with smart machines.
Gartner

The Internet of Things extends well into commercial and municipal uses as well and can include everything from building temperature control systems to sensor enabled trucks to digital manufacturing control systems. Intel has a great explainer video on the Internet of Things that covers additional uses.

What Is Industry 4.0?

Industry 4.0 refers to the use of automation and data exchange in manufacturing. According to the Boston Consulting Group, there are nine principal technologies that make up Industry 4.0: autonomous robots, simulation, horizontal and vertical system integration, the Industrial Internet of Things, cybersecurity, the cloud, additive manufacturing, data and analytics, and augmented reality. These technologies are used to create a “smart factory” where machines, systems, and humans communicate with each other in order to coordinate and monitor progress along the assembly line. Networked devices provide sensor data and are digitally controlled.

Visualization of the nine technologies that make up Industry 4.0.

The net effect is the ability to rapidly design, modify, create, and customize things in the real world, while lowering costs and reacting to changes in consumer preferences, demand, the supply chain, and technology. So how are Industry 4.0 and IoT related, exactly?

Industry 4.0 uses an Internet of Things, or at least an Intranet of Things, in order to perform digital manufacturing. All devices, robots, simulations, and tools have sensors and provide data.

Additionally, no manufacturer is an island: nearly every manufacturer has a supply chain, which in turn has its own tools, data, processes, and network. Bringing each of these networks together into a bigger Internet of Things promises to allow the entire supply chain to react more seamlessly to the market. This networked information sharing will help address long-standing manufacturing problems like the Bullwhip Effect or tracing quality issues down a supply chain.

What Challenges Do the Internet of Things and Industry 4.0 Pose for Manufacturers?

Given that both Industry 4.0 and IoT demand linking together previously independent devices and systems, it isn’t surprising that a chief shared concern is security. As the use of smart devices increases, it will be harder to track breaches and manage all of those devices (Khan/Turowski, p. 6). Industry is moving quickly to address these security concerns, melding new technologies with standard IT security measures like network security and encryption.


Another hurdle for both IoT and Industry 4.0 has been the lack of standards. Having a bunch of smart devices is great, but if they all record data in their own format and require their own protocol, integrating them into an automated factory will be cost-prohibitive and difficult. Manufacturing giants like Bosch, along with the Eclipse Foundation and others, have been working on standard communication protocols and architectures like OPC UA, MQTT, and PPMP. These all aim to help smart devices, including those on the factory floor, communicate with each other and provide common data formats. In the meantime, though, more data formats can mean more difficulty in creating one data model.
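To make the protocol discussion concrete, here is a hedged sketch of a sensor measurement serialized to JSON for publication over MQTT. The topic name and field layout are illustrative, loosely in the spirit of PPMP-style measurement messages, not a conforming implementation:

```python
# Sketch of a sensor measurement serialized to JSON for publication over
# MQTT. Topic and field layout are invented for illustration.
import json

def measurement_message(device_id, series):
    return json.dumps({
        "device": {"deviceID": device_id},
        "measurements": [{"series": series}],
    })

topic = "factory/line1/press4/measurements"   # hypothetical topic hierarchy
msg = measurement_message("press-4", {"pressure_bar": [101.3, 101.6]})
# An MQTT client (e.g. the paho-mqtt package) would then publish:
#   client.publish(topic, msg)
```

The value of agreeing on such a format is that every downstream consumer, from the dashboard to the search index, can parse any machine's readings the same way.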

There are additional challenges for Industry 4.0 and IoT from talent development to IT integration as well as the mere immaturity of some of the technologies. As with any great transformation, there will be transitions that hybridize older methods and technologies with the new, along with risks and rewards.

A “horsecar”: a trolley on a track drawn by a horse. A hybrid technology in place contemporaneously with electric trolleys, early automobiles, and trains.

What Role Does Search Play in Industry 4.0 and IoT?

Industry 4.0 will generate enormous quantities of data. Gathering, analysing and processing such big data will generate new insights, support decision-making and create a competitive advantage.
Deloitte

Search is the original big data technology, and IoT and Industry 4.0 will create massive amounts of data. Ingesting, finding, and analyzing that data efficiently and effectively will be perhaps the greatest challenge and opportunity of Industry 4.0. In addition, providing appropriate information to the humans in the process underlies many of its other challenges. Whether the issue is a shortage of talent or onboarding new talent, success will require presenting people with the relevant information they need, personalized to them and the task at hand.

Moreover, Industry 4.0 will produce a lot of artifacts, from digital twins to actual components. It will be essential to track, organize, understand, and find those components in context. In other words: if smart devices are the fingers, arms, and limbs, and the Internet is the nervous system, then search is the brain behind Industry 4.0 that makes everything work together.

Two key examples of Industry 4.0 search include:

  • Product 360 – used to understand all of the components of a product and their fault data.
  • Enterprise Search / Knowledge Management – used to ensure that each person from marketing to design to quality control can find the relevant information they need from procedures to specifications to models.
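As a concrete sketch of the Product 360 case, a Solr-style query might look like the following. The collection, field names, and component ID are hypothetical, though the q/fq/facet parameters themselves are standard Solr syntax:

```python
# Sketch of a Solr-style "Product 360" query: find fault reports that
# mention overheating for one component, grouped by revision.
# Field names and the component ID are hypothetical.
from urllib.parse import urlencode

params = {
    "q": "notes_txt:overheating",     # full-text match on fault notes
    "fq": "component_id:C-1042",      # filter to a single component
    "facet": "true",
    "facet.field": "revision",        # count faults per revision number
    "rows": 10,
}
query_string = urlencode(params)
# A client would then GET something like:
#   http://solr-host:8983/solr/faults/select?<query_string>
```

The facet counts are what link products back to components and revisions, which is exactly the linkage quality control needs.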

Learn more:


How Open Source Is Driving Industry 4.0


Open source hardware, software, and hybrid solutions are driving Industry 4.0. Manufacturers are using open source hardware and software technologies to improve interoperability, drive innovation, and cut costs. The value of open source in manufacturing is recognized by both industry startups and industry stalwarts like the German engineering and electronics company Bosch. Here is what you need to know about open source.

Open Source Software implementations provide an easy adoption path, near-perfect interoperability with others, and reduces the cost of entering the market.
-Bosch

What Is Open Source and Why Is It Useful?

In manufacturing, open source takes two forms:

  1. Open source software is freely redistributable and modifiable for purpose.
  2. Open source hardware is based on a documented, freely redistributable and modifiable design.

In other words, the idea is to share intellectual property (IP) in order to share R&D costs and drive innovation. A few years ago this idea would have seemed absurd. However, as technology progresses, ideas become “commoditized”; that is, there isn’t a lot of profit to be made in, say, basic thermometers. The patents have long run out, and even relatively primitive thermometers are good enough for everyday use. If a more complex product requires embedding a thermometer, having a cheap, basic design that can be shared among manufacturers and customized for use allows costs to be shared and designs to be fit for purpose.

In software, it is the same idea. The basic underlying full-text index has become commoditized. Our company, Lucidworks, develops products on top of this engine that extend its functionality to provide modern, high-end, AI-powered search. We even collaborate with competitors on the underlying technology.

What Solutions Are Provided by Open Source Hardware?


Affordability is a major driver of open source hardware. Once products for the education and hobby markets, the Raspberry Pi and the various Arduino devices are now used in smart displays from NEC, network monitoring tools, and camera equipment. The low cost and widespread availability of these devices encourages their use in a variety of applications. Another example is Facebook’s Open Compute Project, in which tech companies like Intel, financial companies like Goldman Sachs, and manufacturers like Schneider Electric collaborate on datacenter components. Compared with traditional providers, the pace of improvement is faster and the costs are substantially lower.

Prototyping and customization are inherent advantages of open source hardware. When creating a new product, the ability to create a virtual digital “twin” and rapidly create and modify a prototype is essential for evaluation and testing. The faster that process, the faster the time to market. Cheap, easily modified, easily programmed commodity computing components are essential to today’s smart devices. Even if a more custom chip is ultimately used in the final product, having these open source tools on hand at the start facilitates rapid iteration. Because products are often modified (there is usually a 2.0 released shortly after the first version), it is useful to have easily re-programmable components and developer tools.
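One small illustration of that rapid-iteration loop: parsing telemetry from a prototype board. The “name:value” line format here is invented for this example; on a real bench such lines might arrive over USB serial, for instance via the pyserial package:

```python
# Sketch: parse telemetry lines from a prototype microcontroller.
# The "name:value" line format is invented for illustration; in practice
# the lines might be read from a serial port (e.g. with pyserial).
def parse_reading(line):
    name, _, value = line.strip().partition(":")
    return name, float(value)

sensor, value = parse_reading("temp_c:23.75\n")
```

Because the board is cheap and re-flashable, the line format can change between prototype revisions and only this small parser needs updating.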

Supply chain robustness and collaboration are improved through the use of open source hardware. Inexpensive open source hardware can be sourced from multiple suppliers and in-sourced if required by a legislative or tariff environment. This can help manufacturers improve the robustness of their supply chains. When every vendor on the supply chain can see a design, they can help improve both the design and the way it is created.


Recruiting talent is easier for users of open source technologies. Recruiting and developing talent is identified by most experts as a key challenge for Industry 4.0. Open source hardware and commodity components are easily accessible to colleges, technical schools, and universities. In many cases they provide standard APIs and tools for development and configuration.

In Industry 4.0, sensors are everywhere, and open source hardware enables them. Instrumentation is one of the most important components of Industry 4.0. In a closed-loop manufacturing environment, everything is monitored and inspected. Doing that requires a lot of sensors and tooling. Moreover, on a dynamic line the product might change, the tooling may change, and the sensors may change; being able to repurpose and reprogram a component and acquire inexpensive sensors and tools is not a luxury but a necessity.

What Solutions Are Provided by Open Source Software?


Scalability is critical to Industry 4.0 and IoT. With sensors everywhere and all of those devices creating data, manufacturers depend on their solutions to scale, both in affordability and in technical capacity. Solutions need to handle “that much data,” and open source projects like Apache Solr, Spark, and Kafka are exactly the kind of innovations necessary to handle it, often with real-time considerations.
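The kind of pre-aggregation such pipelines perform can be sketched in plain Python. A production system would do this in, say, Spark over a Kafka stream rather than in-process, but the shape of the rollup is the same:

```python
# Sketch of a streaming-style rollup: collapse raw sensor samples into
# per-minute means so downstream systems store summaries, not every sample.
# In production this would run in a framework like Spark, not in-process.
from collections import defaultdict

def rollup(readings):
    """readings: iterable of (epoch_seconds, value) -> {minute: mean}."""
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[ts // 60].append(value)
    return {minute: sum(vals) / len(vals) for minute, vals in buckets.items()}

means = rollup([(0, 1.0), (30, 3.0), (60, 5.0)])   # two one-minute buckets
```

Aggregating before indexing is one of the main levers for keeping “that much data” affordable to search.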

Interoperability and standardization are major drivers for open source adoption among manufacturers. Even modern factories are a combination of older machines, newer machines, and tools from different vendors. Creating a closed-loop system is not possible unless the data can be rationalized into similar formats and communicated across standard protocols. That is part of why a number of manufacturers and governments have been working together in collaborations such as the one at the Eclipse Foundation. These standards include protocols like MQTT as well as their implementations. Standards like these allow manufacturers to wire together factories even if they have equipment from multiple vendors.

Analytics, AI, and machine learning are virtually dominated by open source. Whether it is Apache Spark or deep learning tools like PyTorch, open source tools and technology dominate the analytics, AI, and machine learning space. Manufacturers looking to make sense of both sensor data and textual, descriptive data rely on open source software to do it.


People are still everything so search is still everything. Even smart factories are run by people. Both people and machines need data specific to them and the context of their task at hand. Today’s search is powered by open source engines like Apache Lucene and Solr. These tools make the right data available to both people and machines and can be used in near real-time contexts.

Most Solutions Are Hybrid Solutions


Smart factories aren’t all based on open source hardware. Manufacturers have secrets and IP that they have to protect and can’t share. Open source software, while powerful, often has sharp edges, particularly around usability and operational concerns. Meanwhile, it is hard to buy purely proprietary software these days; even the least open platforms tend to use open source technologies.

So what is the solution? Find open technologies and platforms combined with solutions from innovative vendors that make these tools and technologies more accessible and usable.

For hardware, open source solutions typically communicate with and are combined with more traditional equipment. Most Industry 4.0 projects are “brownfield” rather than “greenfield,” meaning that newer technologies are added to existing factories. This may involve integrating an IoT gateway to work with existing protocols, aggregate data from existing PLCs, and transmit the data using standard protocols.

For open source software such as search, manufacturers may start out using Apache Solr but later may find Lucidworks Fusion easier to use, feed and operate. Fusion also includes AI and machine learning technologies built on Apache Spark. This hybrid technology combines the advantages of open source with pragmatic solutions to make it easier to use and even more powerful.

People are the engine behind Industry 4.0. Open source hardware and software are the fuel that is helping to make that smart innovation happen.

Next Steps:


AI Is Hot! So Why Are Companies Slow to Warm to It?


The human brain is a marvelous machine that scientists, philosophers, and mathematicians have sought to emulate for centuries. If a human can do it—with its emotional flaws and such—couldn’t a machine do it better? If the money being shoveled into artificial intelligence (AI) startups is any indication, the answer is yes.

Gartner declared AI the top trend for 2018. And you would be hard-pressed to go to any event and not see AI getting top billing (our Activate conference included). When you throw in machine learning (ML), deep learning (DL), and natural language processing (NLP), it’s easy to assume that AI and its subsets are top of everyone’s mind.

Yet despite its ubiquity, organizations still struggle to implement AI. According to Adobe, although 31% of enterprises expect to add AI over the coming year, only 15% are using it today. Meanwhile, Gartner says that by 2020, 50% of organizations will lack sufficient AI and data literacy skills to achieve business value.

If the promise is so phenomenal, why is AI such a second-class citizen? We asked four AI experts to share their thoughts on what is holding organizations back. Of course, data and legacy infrastructure are two of the challenges, but surprisingly, so is talent. Because while AI is allegedly supposed to replace humans, humans are still very much part of the, ahem, equation.


Data, ROI, & Scalability

I actually see AI being used quite readily in smaller tech companies. It’s usually part of their core stack as the people starting these companies are usually the AI experts or AI experts who pair with domain experts. What I have consistently witnessed is that more established companies have a much harder time adopting AI—especially AI in the context of intelligent search, machine learning, and natural language processing. A few reasons why I think adoption is slow:

1. Lack of good internal infrastructure
The basis for developing good ML and NLP models is DATA. Whether it’s labeled or unlabeled data or search logs, organizations need a good data store readily in place for data scientists to explore and build models. Creating a highly accessible data store is a heavy investment and requires lots of data-engineering time to make things happen.

While a data store is one part of the story, a model deployment strategy is another. How do you get from raw data in your data store to making predictions of fraud activity on your website? This is yet another investment where companies need to ensure that there is a clear engineering path for getting your models from prototype to production.
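A deliberately tiny sketch of that prototype-to-production path: train something (here a trivial threshold “model,” not a real learner), persist it, and reload it for serving. A real pipeline would substitute an actual algorithm, for example from scikit-learn, and a proper model registry:

```python
# Minimal sketch of "prototype to production": train, persist, reload, serve.
# The "model" is a toy threshold, standing in for a real learner.
import pickle

def train(amounts, labels):
    # Toy rule: flag anything above the mean of known fraudulent amounts.
    frauds = [a for a, y in zip(amounts, labels) if y == 1]
    return {"threshold": sum(frauds) / len(frauds)}

def predict(model, amount):
    return 1 if amount > model["threshold"] else 0

model = train([10, 900, 20, 1100], [0, 1, 0, 1])
blob = pickle.dumps(model)          # the artifact you would ship to serving
served = pickle.loads(blob)         # what the prediction service reloads
```

The engineering investment is everything around this loop: versioning the blob, monitoring its predictions, and retraining as the data drifts.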

2. Confusion at the management level on applicability of AI
What AI or machine learning can REALLY do is kind of a black box to most product managers and decision makers. Unless you are working with it day in and day out, the concepts can seem intimidating, and it’s not immediately clear how these technologies help.

This causes general confusion as to where AI and machine learning should be used to get the best ROI on dollars invested. I’ve seen countless hours wasted on solving problems with machine learning when all that was really needed was several if-else rules.
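That point can be made in code: for many triage tasks, a transparent rules baseline like the following is all that is needed. The thresholds and field names are invented for illustration:

```python
# A transparent rules baseline: easy to audit, explain, and change.
# Thresholds and fields are invented for illustration.
def needs_review(order):
    if order["amount"] > 1000:                     # unusually large order
        return True
    if order["country"] != order["card_country"]:  # geography mismatch
        return True
    return False
```

If a baseline like this already catches most cases, the incremental value of an ML model, and the infrastructure it demands, should be measured against it before anything is built.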

3. The hype factor
Let’s talk data-related hypes. We have seen everything from big data to data science to machine learning. We are currently in the AI and deep learning hype phase. There will always be hyped up technologies and some companies tend to get really caught up in it. They try to fit their data and automation problems into the mold of the current hype. That just doesn’t work.

You could run into scalability issues due to the complexity of the new technology, or the technology could limit you in such a way that it works on only a handful of use cases. These are all based on true events.

Kavita Ganesan
Kavita Ganesan is a Senior Data Scientist at Github and holds a Ph.D in Text Mining, Analytics and Search. She has over a decade of experience in building scalable Machine Learning and NLP models for various companies she has worked for including eBay, 3M and GitHub. In 2017, Kavita led the launch of the first production scale NLP and Machine Learning pipeline at GitHub with the release of GitHub topics touching millions of repositories.


AI/Machine Learning Talent & Transparency

Artificial Intelligence has extensive, conceivable applicability. However, we’re still in the initial stages in terms of the adoption of these technologies, so there’s a long way to go.

AI has potential applications across various sectors, e.g., healthcare, retail, semiconductors, etc. However, when evaluating the use of AI, business leaders have to keep in mind that this is still a very fast-evolving set of techniques and technologies.

There are constraints that are purely technical. Typical questions when moving from, for example, a historically database-based system are:
1. Can we actually explain what the new AI/ML algorithm is doing?
2. Can we interpret why it’s making the selections and forecasts that it’s making?
3. If the results are not much different, then why should I change?

And there are some real-world limitations as well. A lot of data is needed for the AI/ML algorithms to train and the data needs to be labeled. But is the data actually available? And is it labeled?

If data exists, there are still questions about:
• How clear are the algorithms?
• Is there any bias in the data?
• Is there any prejudice in the way the data was collected?

Though we have this idea of machine learning, we first have to train humans to collect the data and train the algorithms. Now companies have to find a talent pool of people who are able to do it.

And companies have to keep an open mind. As with every new technology adoption, things will eventually become easier and more evolved. Most companies have a lot of data, most of it wasted, so it’s very important to invest in prepping and cleansing the data so it can be made useful.

If you cannot hire new talent, train the current talent. The concept of AI is easy for humans to understand. We learn by observing patterns in our surroundings, and AI is training machines to learn from the data we feed them and to reach conclusions in a way similar to a human. That means it can take examples and learn from them to increase the accuracy of its future conclusions.

I think business leaders should just start to understand the technology and what’s possible. Think of how AI works as analogous to the way you learn what a dog looks like: by looking at photographs of labradors, poodles, and pit bulls, and then, when you’re later shown a picture of a doberman, being able to identify it as another breed of dog.

Now try to understand the implications that type of learning could have for your organization. Consider real estate: by having the machine look at past house prices and house descriptions, it can predict the price of any house for which it has the data.

The machine can reach a new, accurate, conclusion based on what it has learned in the past.
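A toy version of that real-estate example, fitting price against size with closed-form ordinary least squares on made-up numbers:

```python
# Toy house-price model: ordinary least squares with one feature (size),
# computed in closed form. All numbers are made up for illustration.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx        # (slope, intercept)

sizes = [1000, 1500, 2000]               # square feet of past sales
prices = [200_000, 300_000, 400_000]     # their sale prices
slope, intercept = fit_line(sizes, prices)
predicted = slope * 1800 + intercept     # estimate for an unseen 1800 sqft house
```

Real models use many more features (location, age, description text), but the principle is the same: learn a relationship from past examples and apply it to new ones.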

AI/ML is widely applicable. So, understand where it can help you get value.

Anupama Joshi
Anupama Joshi is the Senior Engineering Manager at Reddit, managing the search and discovery efforts from ingestion to results and infra to ranking. Expert in managing cross-disciplined teams, her organization is focused on the infrastructure and development of core ranking algorithms and ranking signals to optimize the quality of search results.


Legacy Code & Infrastructure

I think AI is extremely important for most companies right now to automate existing processes, create new innovative ones, and increase the efficiency of existing processes across the board. While the tools and talent for incorporating AI in business exists, there are some hurdles along the way.

A lot of companies have legacy code and infrastructure that are not easy to build AI into and that require a lot of investment from the business. AI algorithms are often built on top of a data layer, and having easy access to reliable, structured data can be difficult. There are also difficulties around finding the right talent, picking the right tools, and seeing and communicating results from AI in a reasonable amount of time across the organization.

I think most of the difficulties around implementing AI in business stem from the difference between AI development and classic software development.

Successful AI implementation and integration require a shift in mindset, strategy and clear communications between business and the machine learning or data science teams. In most cases, integrating AI in an organization requires an investment from the business. This is due to AI’s nature.

AI is an iterative process, and in most cases immediate results are not possible before building the right underlying infrastructure and going through rounds of iteration, refining the input data, features, modeling, and inference.

I usually advise the AI teams to focus on building the correct data layer, infrastructure, and the simplest model, and focus on iterations involving more advanced modeling and featurization techniques afterwards. This requires clear communications with the organization and setting realistic goals and timelines.

I think organizations should define long-term goals and a vision around AI and shift away from a quick-wins or “fixes” mindset. Organizations can create clear goals around AI, invest in this iterative process, and enable their data scientists and engineers to build what is necessary for successful implementation of AI!

Kamelia Aryafar
Kamelia Aryafar, Ph.D., is the Chief Algorithms Officer at Overstock.com, leading the company’s ML, data science, data engineering and analytics functions across the marketing, customer, sourcing, and website verticals. Since joining Overstock.com in 2017, her teams have integrated ML and AI algorithms across various product teams, including personalization, pricing, ranking, search, recommender systems, marketing, CRM, advertising technologies, email, sourcing, and supply chain.


Human Talent & Vision Are Key

I’m one who thrives on envisioning and architecting how data, artificial intelligence, and technology can make our world a better, easier place to live. The reality is that AI systems are really hard to implement. AI is still in its infancy. Just because an AI system won against a human at a game doesn’t mean it can be used in your business to drive immediate outcomes.

Building AI systems for an organization requires vision (a leader with a unique combination of business and technical strength), expertise (talent), and data (domain-specific and in massive quantity); none of these are easily available in most organizations.

Unless an organization is ready to make the investments necessary to get the above three factors in place, they will find it very hard to succeed in building and implementing AI systems.

Beena Ammanath

Beena Ammanath is the founder and CEO of the nonprofit Humans For AI Inc. and an award-winning senior digital transformation leader with extensive global experience in artificial intelligence, big data, and IoT. Her knowledge spans e-commerce, financial, marketing, telecom, retail, software products, services, and industrial domains, with companies such as Hewlett Packard Enterprise, GE, Thomson Reuters, British Telecom, Bank of America, E*TRADE, and a number of Silicon Valley startups.


 

Want to Learn More?

Join us at Activate, the search and AI conference, where you can hear from these experts and more than 75 others, Oct 16-18, in Montreal.

The post AI Is Hot! So Why Are Companies Slow to Warm to It? appeared first on Lucidworks.

The Truth Behind ‘Rigging’ Search


A recent tweet by Donald Trump stated that “Google search results for ‘Trump News’ shows only the viewing/reporting of Fake News Media…” which proves that Google has “rigged search.”

Here’s the full tweet:
“Google search results for “Trump News” shows only the viewing/reporting of Fake News Media. In other words, they have it RIGGED, for me & others, so that almost all stories & news is BAD. Fake CNN is prominent. Republican/Conservative & Fair Media is shut out. Illegal? 96% of results on “Trump News” are from National Left-Wing Media, very dangerous. Google & others are suppressing voices of Conservatives and hiding information and news that is good. They are controlling what we can & cannot see. This is a very serious situation-will be addressed!”

So the first question is: is “rigging” even possible? If so, how easy is it to do? And finally, should something be done about it?

First, let’s talk about searching in general. I got into search about 15 years ago when I co-founded Lucidworks, the sponsoring company behind the world’s most popular open source search engine, Apache Solr.

Without the likes of Google, and on the retail side, Amazon, there’s no doubt the internet would not be anywhere near what it is today. We can find anything in just minutes. At the same time, people’s expectations of search are increasing. We want the results we want—which can change if we are at home in North Carolina, or in our corporate offices in San Francisco. Or it can change by time of day, or our work life to our home life.

What we really want are results that are relevant to our life. Which gets us to “rigging.”

Weighting Search to Provide Richer Experiences

As a search professional, I work to tune search to learn from our users so that we can get the perfect pieces of content in front of them. We do so by looking at various “signals” such as geospatial data, time of day, and past search history.

For example, if you go to Google and search for green furniture, there are about 2 billion results. But if the engine knows you are into sustainability, it should push furniture that is eco-friendly to the top. And if the engine sees that you are in downtown San Francisco, it should map out stores in that area.

I guess to some degree some might call this “rigging,” although we call it “tuning” or “weighting.” The idea though, is that the machine aggregates knowledge from where the preponderance of people clicked for credible information on a given subject. It then ties in your behavior.

This weighting is what makes your search experience better.
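In Solr-based engines, one common way to express this kind of weighting is a boost query. A rough sketch, with hypothetical field names (`attributes`, `store_location`) used purely for illustration:

```
q=green furniture
bq=attributes:eco_friendly^2.0
bq={!geofilt sfield=store_location pt=37.77,-122.42 d=10}^1.5
```

The `^` boost values raise matching documents in the ranking without filtering anything out, which is the practical difference between “weighting” results and excluding them.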

How Easy Is It to Weight Search?

Is weighting easy to do? At the simplest level, of course it is. It’s just software and most search engines support editorial rules. At a practical level, it’s often a waste of time, not to mention a slippery slope that turns off users.

At the more complex level, companies like Google and mine make it seem easy—because this is what we concentrate on all day long.

We spend a lot of time analyzing and creating mathematical algorithms that help us get better in predicting user intent.

We are continuously testing and adding in new signals.

We learn when you give up on searching, or if you have to modify your search query, because you didn’t get the results you wanted.

But Mr. Trump’s dissatisfaction shows that just because something is popular does not mean it will be perfect for everyone. It also shows the fickle nature of the internet in a nutshell: being popular today doesn’t guarantee you will be popular tomorrow, whether you’re a politician or a tiny company just fighting to be heard.

The post The Truth Behind ‘Rigging’ Search appeared first on Lucidworks.

Site Search Adds Three New Languages


Bom dia!

Goedemiddag!

God dag!

Lucidworks Site Search, our embeddable, easy-to-configure, out-of-the-box site search solution that runs anywhere has added three new languages.

Site Search Now Supports 10 Languages

In addition to the six new languages we announced last month, Site Search can now index and provide interface elements in Portuguese, Dutch, and Swedish. This includes all of the embeddable widgets of Site Search – all with language auto-detection and results that will prioritize the user’s language by default.

Instant Actions Let You Resolve Issues Inline

We’ve also added instant actions on the reporting side so you can quickly set up synonyms and manage promotions right from the report view.

Topics for Segmenting Search Results

Site Search has Topics so you can index multiple sites and data sources and then present a different experience to different audiences. Embedded elements on your main site can surface results from your entire network of sites, while the search on your support site shows only knowledge base articles and tutorials, and a separate search on your press site serves up just press releases and coverage.

Site Search in Five Minutes

Lucidworks Site Search is built for teams who need powerful and fast site search for their public websites and don’t have the time to get mired in a lengthy procurement and development process. Site Search gives you the smart content extraction with industry-leading, field-tested AI and content algorithms to connect your site visitors with what they’re looking for quickly.

Learn more about Lucidworks Site Search and start your trial today at lucidworks.com/site-search.

The post Site Search Adds Three New Languages appeared first on Lucidworks.

Boost Your Search With Machine Learning and ‘Learning to Rank’


Most companies know the value of a smooth user experience on their website. But what about for their onsite search? Simply shoving Ye Olde Search Box in the upper right corner doesn’t cut it anymore. And having bad search could mean bad news for your online presence:

  • 79% of people who don’t like what they find will jump ship and search for another site (Google).
  • Only 15% of brands dedicate resources to optimize their site search experience (Econsultancy).
  • 30% of visitors want to use a website’s search function – and when they do, they are twice as likely to convert (Moz).

This expands even further to the search applications inside an organization like enterprise search, research portals, and knowledge management systems. Many teams focus a lot of resources on getting the user experience right: the user interactions and the color palette. But what about the quality of the search results themselves?

Automate Iterations With Machine Learning

Smart search teams iterate their algorithms so relevancy and ranking are continuously refined and improved. But what if you could automate this process with machine learning? There are many methods and techniques that developers turn to as they continuously pursue the best relevance and ranking.

There are several approaches and methodologies to refining this art. One popular approach is called Learning-to-Rank or LTR.

Machine Learning
Image from Catarina Moreira’s machine learning course at University of Lisbon

 

LTR is a powerful technique that uses supervised machine learning to train a model to find “relative order.” “Supervised” in this case means having humans manually rate or order the results for each query in the training data set, then using that sample to teach the system to reorder a new set of results.

Popular search engines have started bringing this functionality into their feature sets so developers can put this powerful algorithm to work on their search and discovery application deployments.

With this year’s Activate debuting an increased focus on search and AI and related machine learning technologies, there are two sessions focused specifically on using LTR with Apache Solr deployments. To help you get the most out of these two sessions, we’ve put together a primer on LTR so you and your colleagues show up in Montreal ready to learn.

But first some background.

How LTR Differs From Other ML Techniques

Traditional ML solutions are focused on predicting or finding a specific instance or event and coming up with a binary yes/no flag for making decisions or a numeric score. Think of use cases like fraud detection, email spam filtering, or anomaly identification. It’s either flagged or it’s not.

LTR goes beyond just focusing on one item to examining and ranking a set of items for optimal relevance. With LTR there is scoring involved for the items in the result set, but the final ordering and ranking is more important than the actual numerical scoring of individual items.

How LTR Knows How to Rank Things

The LTR approach requires a model or example of how items should be ideally ranked. This is often a set of results that have been manually curated by subject matter experts (again, supervised learning). This relies on well-labeled training data, and of course, human experts.

The ideal set of ranked data is called “ground truth” and becomes the data set that the system “trains” on to learn how best to rank automatically. This method is ideal for precise academic or scientific data.

A second way to create an ideal set of training data is to aggregate user behavior like likes, clicks, and view or other signals. This is a far more scalable and efficient approach.
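As a sketch of that second approach, here is a toy Python example that turns raw click events into graded judgments. The click-through-rate bucketing heuristic and the 0-3 grade scale are illustrative assumptions, not how any particular product computes judgments:

```python
from collections import defaultdict

def clicks_to_judgments(click_log, grades=4):
    """Turn (query, doc, clicked) events into graded relevance judgments
    by bucketing each (query, doc) pair's click-through rate."""
    shown = defaultdict(int)
    clicked = defaultdict(int)
    for query, doc, was_clicked in click_log:
        shown[(query, doc)] += 1
        clicked[(query, doc)] += int(was_clicked)
    judgments = {}
    for key, times_shown in shown.items():
        ctr = clicked[key] / times_shown
        # Bucket CTR into an integer grade in [0, grades - 1]
        judgments[key] = min(grades - 1, int(ctr * grades))
    return judgments

log = [
    ("ipad case", "doc1", True), ("ipad case", "doc1", True),
    ("ipad case", "doc2", False), ("ipad case", "doc2", True),
]
print(clicks_to_judgments(log))
# → {('ipad case', 'doc1'): 3, ('ipad case', 'doc2'): 2}
```

In production you would also correct for position bias (top results get clicked more regardless of relevance), which is one reason behavioral judgments need more care than this sketch suggests.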

LTR With Apache Solr

With version 6.4, Apache Solr introduced LTR as part of its libraries and API-level building blocks. But the reference documentation might only make sense to a seasoned search engineer.

Solr’s LTR component does not actually do the training on any models — it is left to your team to build a model training pipeline from scratch. Plus, figuring out how all these bits and pieces come together to form an end-to-end LTR solution isn’t straightforward if you haven’t done it before.
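For orientation, feature definitions in Solr’s LTR module are JSON documents uploaded to a feature store. A minimal sketch, roughly following the Solr Reference Guide (the feature names and the `title` field here are hypothetical):

```json
[
  {
    "name": "originalScore",
    "class": "org.apache.solr.ltr.feature.OriginalScoreFeature",
    "params": {}
  },
  {
    "name": "titleMatch",
    "class": "org.apache.solr.ltr.feature.SolrFeature",
    "params": { "q": "{!field f=title}${user_query}" }
  }
]
```

A trained model is uploaded similarly to the model store, and results are then reranked at query time with a rerank query such as `rq={!ltr model=myModel reRankDocs=100 efi.user_query='ipad case'}`. Learning the model’s weights, however, is still up to your own pipeline.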

So let’s turn to the experts.

Live Case Study: Bloomberg

Financial information services giant Bloomberg runs one of the largest Solr deployments on the planet and is always looking for ways to increase and optimize relevancy while maintaining split-second query response times to millions of financial professionals and investors.

In their quest to continuously improve result ranking and the user experience, Bloomberg turned to LTR and literally developed, built, tested, and committed the LTR component that sits inside the Solr codebase.

Those engineers from Bloomberg will be onstage at the Activate conference in Montreal this October to talk about LTR. They’ll discuss their architecture and challenges in scaling and how they developed a plugin that made Apache Solr the first open source search engine that can perform LTR operations out of the box.

You’ll hear the full war story of how Bloomberg’s real-time, low-latency news search engine was trained on LTR and how your team can do it, too – along with the many ways not to do it.

Full details on this session at Activate 2018: “Learning to Rank: From Theory to Production”

Activate search and AI conference banner

Live Demo: Practical End-to-End Learning to Rank Using Fusion

Also at Activate 2018, Lucidworks Senior Data Engineer Andy Liu will be presenting a three part demonstration on how to set up, configure, and train a simple LTR model using both Fusion and Solr.

Liu will demonstrate how to include more complex features and show improvement in model accuracy in an iterative workflow that is typical in data science. Particular emphasis will be given to best practices around utilizing time-sensitive user-generated signals.

The session will also explore some of the tradeoffs between engineering and data science, as well as Solr querying/indexing strategies (sidecar indexes, payloads) to effectively deploy a model that is both production-grade and accurate.

Full details on this session at Activate 2018: “Practical End-to-End Learning to Rank Using Fusion”

So that’s a brief overview of LTR in the abstract, plus where to see it in action with a real-world case study and a practical demo of implementing it yourself. Here’s even more reading to make sure you show up in Montreal ready to get the most out of these sessions:

More LTR Resources

Bloomberg’s behind-the-scenes look at how they developed the LTR plugin and brought it into the Apache Solr codebase

Our ebook Learning to Rank with Lucidworks Fusion on the basics of the LTR approach and how to access its power with our Fusion platform. Accompanying webinar.

An intuitive explanation of Learning to Rank by Google Engineer Nikhil Dandekar that details several popular LTR approaches including RankNet, LambdaRank, and LambdaMART

Pointwise vs. Pairwise vs. Listwise Learning to Rank also by Dandekar

A real-world example of Learning to Rank for Flight Itinerary by Skyscanner app engineer Neil Lathia

Learning to Rank 101 by Pere Urbon-Bayes, another intro/overview of LTR including how to implement the approach in Elasticsearch

= = =

Want to Learn More? Join us at Activate, the search and AI conference, where you can hear from these experts and more than 75 others, Oct 15-18, in Montreal.

Activate the search and AI conference

The post Boost Your Search With Machine Learning and ‘Learning to Rank’ appeared first on Lucidworks.

How Wayfair Improves Customer UX Using Search/AI


If you’re in ecommerce, you take inspiration from the leaders. One of the talks we’re really excited about at this year’s Activate conference is with Suyash Sonawane and John Castillo from online retailer Wayfair. As one of the largest online destinations for home, Wayfair knows a thing or two about what its customers shopping onsite need and what they expect. With two million users daily and more than 10 million products in its catalog, the scale of their operations covers “a zillion things for the home.”

woman sitting on couch

Their session, “Using Opinion Mining and Sentiment Analysis to Discover Hidden Product Features for E-Commerce Search” explores how to discover insights in the things your customers say.

When you’re in ecommerce, it’s paramount that you’re able to describe products in the written word. Describing physical dimensions or the material of a product can only do so much; often, the best words to describe an item are those used by your very own customers. Enter stage left: product reviews.

Senior Engineers Suyash Sonawane and John Castillo ingested a myriad of customer signals and processed millions of product reviews using natural language processing in order to extract useful information. By combining these techniques, they developed insights about what customers say about different products.

In their talk, Sonawane and Castillo detail how Wayfair’s Search Tech team has looked beyond the product catalog to improve search relevance onsite and influence customer experience at scale.

customer product review

Thinking Ahead

Imagine the possibilities. Augmenting your product descriptions and search terms with what customers actually say, for instance “this barbecue grill is leak-free,” surfaces an important characteristic.

Customers also tell you when things are going wrong, as well as what alternatives they choose: “I’ve been ordering this for a long time but recently the quality has gone down so I switched to SuperX brand.”

These sorts of insights go beyond analytics and allow you to optimize keywords, make alternative recommendations, detect when something has gone wrong, and even figure out how to resolve common problems. This is a type of customer signal that, with the right technique, you can mine to optimize your conversion rate and your relationship with your customers!

In other words, you can Activate your AI and Search capabilities like Wayfair.

Next Steps

The post How Wayfair Improves Customer UX Using Search/AI appeared first on Lucidworks.


Deep Learning Question-Answering System


You’ve probably used a Question Answering (QA) system. Most of them are just a FAQ turned into a horrible search interface. If you don’t ask the exact question they answered, don’t bother. Other QA systems are basically just keyword search that lets you put in questions.

So what is a proper question answering system? The answer seems obvious: “it is a system that answers your questions.” But to do it properly it needs to recognize synonyms, close-enough answers, and other aspects of the meanings of questions specifically and of language generally.

question answering system

In their talk, “Enriching Solr with Deep Learning for a Question Answering System” at this year’s Activate conference, Lucidworks data scientists Savva Kolbachev and Sanket Shahane will show you a powerful question answering system that they constructed by adding deep learning to Solr. They’ll both show how to produce more accurate answers as well as how to use Solr to scale the approach given the weights of deep learning models.

question answering system

Their talk will cover technique as well as the more technical mathematical and statistical details, and include a demo. Additionally, they’ll detail highlighting using sentiment analysis.

sentiment analysis

If you’re trying to create an Information Retrieval system such as a QA system, or even if you’re just really interested in deep learning, you’re definitely not going to want to miss this talk. See you in Montreal next week!

Next Steps

The post Deep Learning Question-Answering System appeared first on Lucidworks.

Activate Conference 2018 Wrap-Up: The Future of Search and AI


Practitioners in the search, Apache Solr, and AI communities from 39 countries converged last week at the Activate conference in Montreal to discuss the challenges and opportunities in their fields today. A multitude of trainings and talks on subjects spanning the spectrum were presented, from use case stories about how companies like Slack and Reddit scale their search, to using Apache Solr to power a 3D printed open source robot. And, undoubtedly, during breaks and social hours, one-on-one conversations sparked new ideas between experts and newcomers as knowledge was shared. Here’s a brief wrap-up of some of Activate’s highlights and themes this year:

The revolution is over

In his opening remarks, Lucidworks CEO Will Hayes explained the rebranding of the conference from “Lucene/Solr Revolution” to “Activate” saying, “We can successfully declare that the revolution has been won. Open source isn’t just a mainstay in enterprise technology stacks – it has become the standard for how innovative products and solutions are brought to market.” The additional focus on AI this year was a natural evolution of the conference as search activates AI. In fact, search has been the instrument for the most widespread use of AI for the past 20 years.

Where is search headed now?

Here’s just a small selection of innovative search uses demonstrated in conference talks:

  • A Q&A system created with deep learning algorithms to provide a natural language interface between questions and answers, such as “Can my wife drive on my insurance?” with the response “The answer is yes, unless your husband has coverage on a separate auto insurance policy.”
  • Red Hat’s robust customer portal built with Lucidworks Fusion and Solr that has dramatically reduced support costs and improved customer satisfaction as customers prefer the ability to self-solve issues at their convenience.
  • Search is driving intelligence around cybersecurity and real-time threat detection.
  • Query intent understanding and relevancy improvements are being made using contextually-driven semantic search.
  • Learning to Rank is being leveraged for type-ahead and many additional powerful ways.

Search provides the most natural interface between consumer and enterprise by capturing each person’s intent and desires. With the customer experience bar continuously being raised by companies like Google and Amazon, customer expectations are high.

Hadoop Big Data Spark on Google Trends

This could explain why big data, Apache Hadoop, and data lakes are declining in popularity and effectiveness. Meanwhile, interest in Apache Spark, Solr, and their combined powers in Lucidworks Fusion continues to rise as they provide actionable insights and a means to improve customer satisfaction.

Someday soon typing into a search box won’t be necessary as AI will predict a user’s desire before it’s expressly stated. Of course, this progress requires an extensive amount of work for people in the search and AI fields, as well as people in correlated fields who will enable efficiency in its adoption. As Lucidworks SVP of Engineering Trey Grainger said during Activate’s closing keynote, “We need search to be multi-disciplinary, taking the best from devops, cloud ops, text search, personalization and recommendations systems, data science, and business domain experts, to enable us to activate our data and maximize the impact.”

Above all, the conference echoed that the future of search and AI is not artificial, it has to be centered around humans: human experiences, human advancement, and evolutions in the way we communicate. The fields of search and AI are turning away from a short-sighted data centered approach and toward the humans at its core. This begins at conferences like Activate, where people on the cutting edge of this technology challenge each other to improve.

The future of search and AI starts now

Session recordings and slides from Activate 2018 will be available in the coming weeks on YouTube and SlideShare. Follow us on Twitter to stay up-to-date on all conference materials and dates and location for 2019.

The post Activate Conference 2018 Wrap-Up: The Future of Search and AI appeared first on Lucidworks.

Activate Conference Provides an Homage to Search—With AI Twist


The search and AI conference Activate 2018 was held last week in the birthplace of internet search — Montreal. In 1989, a postgraduate student and systems administrator at McGill University named Alan Emtage architected the first internet search engine, Archie. The first version exposed a simple UI and indexed FTP archives that would be searched on the search server with grep.

Figure A

 

Nearly 30 years later, search has come a long way. At Activate 2018, Lucidworks showed the Lucene-Solr world new Artificial Intelligence (AI) and user interfaces for managing information retrieval in an ongoing manner.

Full disclosure: I did not attend the Lucidworks conference as an employee or customer of Lucidworks — but as a software engineer who wanted to learn search from the experts. I saw the list of presenters and they looked eerily similar to many of the Lucene-Solr core contributors. Activate turned out to be chock full of some of the most influential architects and developers in search, machine learning, and natural language processing.

Experts Offer Practical Advice … & Caveats

The conference featured real practitioners — like Josh Wills, who leads search at Slack and previously worked at Google — who cautioned that models are not one-size-fits-all, and sometimes, they go stale. On a panel moderated by Grant Ingersoll, Lucidworks founder and CTO, Wills recalled a time at Google when ads powered by machine learning started to slowly present themselves less frequently. He joked that their ad system had become “self-aware” and that the ad delivery application discovered “that it didn’t particularly like ads itself.”

He and other panelists agreed that reliance on AI can backfire when you build a system that is overly complicated, or when it leverages a model that has become obsolete.

Infer Query Intent

At Activate, we dove deep into how the latest version of Fusion allows users to enrich their data with built-in automation jobs like logistic regression analysis. During the AI class, I kicked off a logistic regression classification job on a public eCommerce data set and learned that more than 20% of the queries that were classified as “Computers” were also classified as “Accessories.” This is where Fusion Server’s Phrase Extraction stood out to me.

New in Fusion 4.1, you can run a job to extract phrases from your dataset that might otherwise direct users to the wrong place. Take, for example, the query “red iPad case.” A search engine could go in a lot of directions with that query (as with most), as the results might take someone shopping for “Accessories” to “Computers.”

If iPad is boosted, there’s an even greater chance the results do not get a user to where they are trying to go. With the help of the Phrase Extraction job, and the ~ for fuzzy searching, Fusion helps you pluck out the signals that more accurately direct searchers to the information they are looking for when they search. “iPad Case”, “Case iPad,” or any combination of words that includes both “iPad” and “case” should return iPad cases first. Not iPads.

Eliminate User Errors on Phones and Tablets

Now that tablets and phones are ubiquitous, search latency’s influence on user retention has become even more important. Users on touch screens are much more error prone, with mobile device users likely to submit queries with 3 incorrect characters because the buttons are smaller.1, 2

Mistakes, though, mean more processing time, or worse, slower and less helpful results. Despite this, mobile users want their search results now. Yet traditional search engines, including Solr, really struggle to handle search with an edit distance greater than 2.

In simplified terms, the edit distance equals the number of mistakes or typos in a query, and it is based upon the Damerau-Levenshtein distance. Each time an original query requires an insertion, a deletion, a substitution, or a transposition of characters to form the user’s intended query, you add an increment to the edit distance.

Suppose a user looking for an iPad case were to search with “ipd xasse.” The edit distance to the target query would be 3: one for inserting “a” before “d,” one for substituting “c” for “x” in “xasse,” and a third for deleting the extra “s” in “casse.” Solr implements the Levenshtein algorithm for all queries with an edit distance between 0 and 2, but for edit distances > 2, Solr gets slower.
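The calculation behind that worked example can be sketched in a few lines of Python. This is the restricted (optimal string alignment) variant of Damerau-Levenshtein, shown only to illustrate the arithmetic, not how Solr implements it internally:

```python
def damerau_levenshtein(a: str, b: str) -> int:
    """Edit distance with insertions, deletions, substitutions,
    and adjacent transpositions (restricted Damerau-Levenshtein)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # cost of deleting all of a[:i]
    for j in range(n + 1):
        d[0][j] = j  # cost of inserting all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution (or match)
            )
            # adjacent transposition, e.g. "ac" -> "ca"
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + cost)
    return d[m][n]

print(damerau_levenshtein("ipd xasse", "ipad case"))  # → 3
```

Because the table is (m+1) x (n+1), the cost grows with query length as well as with the allowed edit distance, which is part of why fuzzy matching beyond distance 2 gets expensive.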

Fusion, however, can handle edit distances greater than that. Chao Han, VP of Data Science at Lucidworks, demonstrated how Fusion’s query parser can quickly handle some queries that may have been initially discovered using Fusion’s head/tail analysis.

Let’s say we found “ipd xasse” in that list of tail queries—the type of queries that do not drive a lot of user engagement. The engine powering Fusion’s head/tail analysis can suggest a rewrite, incorporate contextual information, and boost a tail to the head using the Token and Phrase Spell Correction job.

Fusion should properly rewrite “ipd xasse” for you. If it doesn’t, Fusion also provides a web interface for manually editing the Solr synonyms.txt file (Figure B). You can add the rewrite for “ipd xasse” as a synonym of “ipad case,” and subsequent instances of the query will drive users to relevant “Accessories.” Otherwise, the mobile user might go to another site to purchase an iPad case and more.

Figure B

 

No one should accept losing customers to competitors due to a slow mobile search experience!
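For reference, a one-way mapping in Solr’s synonyms.txt covers exactly this case (the entry below is a hypothetical example, not a shipped default):

```
# one-way query rewrite: garbled query => intended phrase
ipd xasse => ipad case
```

Multi-word mappings like this depend on query-time analysis settings, so test them against real queries before relying on them.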

Spinning Up Clusters—and Solutions

Attendees at this conference spent the week spinning up server clusters, brainstorming solutions to each others’ problems, and focusing on the details. It wasn’t a vendor self-promotion conference, though.

Regardless of your role in your organization, your experience with Fusion, or your Solr knowledge, there were sessions for everyone with actionable information. Although I had been recently exploring the capabilities of Fusion through trial downloads and tutorials on the Lucidworks website, I was able to jump in the deep end upon arriving for instructor-led and TA-assisted training on Monday.

If you wanted to move fast, you could take the course materials and move at your own pace with the presentation as helpful background music. If you were new to Solr and Fusion, not especially technical, or needed to slow down, TAs (Lucidworks employees) were on-hand to help you over the hump. Everyone who attended leveled up.

Back in 1989 in Montreal, Alan Emtage’s pioneering move to bring search, a capability well known to people familiar with the command line at that time, to a user interface on the web spawned an era in computing that has radically transformed our world forever.

Today, almost every site or app has some implementation of search. Search engines serve as the homepage of the web for most internet and intranet users. In the same city 29 years later, Activate 2018 brought AI to search. I cannot say what the confluence of AI and search will mean for the next 30 years, but what a time to be alive!

_____

1 In a study about password typos and secure correction, the IEEE found that keyboard-proximity typos were proportionally higher on mobile. While mobile OSes tend to be good at token autocorrect, proximity typos, of course, are not caught every time in input fields. More studies are needed on the subject of mobile typos.

Chatterjee, Rahul, Anish Athayle, Devdatta Akhawe, Ari Juels, and Thomas Ristenpart. “PASSWORD TYPOS and How to Correct Them Securely.” PASSWORD TYPOS and How to Correct Them Securely – IEEE Conference Publication. August 18, 2016. Accessed October 29, 2018. https://ieeexplore.ieee.org/abstract/document/7546536.

“The problem may be exacerbated by various input device form factors, e.g., mobile phone touch keyboards.”

2 Gordon, Whitson. “How to Prevent Those Annoying Texting Typos.” Popular Science. April 05, 2018. Accessed October 29, 2018. https://www.popsci.com/prevent-texting-typos#page-2. “Phones have small screens, and we have big thumbs. This makes us inherently more prone to mistakes when we’re poking at a phone keyboard with our sausage fingers.”
_____

Marcus Eagan is a software engineer based in Palo Alto, California.

The post Activate Conference Provides an Homage to Search—With AI Twist appeared first on Lucidworks.

Activate Conference 2018 Photo Album


Did you attend Activate, the Search and AI Conference, October 15-18, 2018 in Montreal? Enjoy a few photo highlights of the learning and festivities that took place, and check back as we’ll add additional images soon. 

The post Activate Conference 2018 Photo Album appeared first on Lucidworks.

Increase Your Revenue to Profit Ratio


Retail margins are in jeopardy. Between mobile shopping, online retail, and the rise of social media, coupled with a decade of flat sector profitability, retailers must fundamentally change how they operate or join the ever-growing graveyard of bygone brands. Even a stalwart brand like Sears has failed to make this transition and has announced its likely demise.

However, there is a brighter future for retailers who combine smart, focused business strategies, cost controls, as well as data and AI technologies. From Michael Kors to Home Depot, smart retailers are combining data technologies, artificial intelligence, and search to drive better decisions, improve margins, and increase sales.

How can you perform like a leader?

“Being agile enough to compete isn’t a one-time exercise that happens by just cutting costs. Success comes from reinvesting those savings in activities that will drive competitive advantage.”

Cut Costs But Don’t Stop There

According to Accenture, “being agile enough to compete isn’t a one-time exercise that happens by just cutting costs. Success comes from reinvesting those savings in activities that will drive competitive advantage and revenue growth, such as creating a more efficient operating model, embedding enterprise wide process excellence or building leading edge capabilities.”

Top-performing retailers don’t cost-cut their way to profitability. Failing retailers like Sears/Kmart have tried that for years. Cutting your way into profitability is rare, and even if you get there, you’re never a leader. Once a company is profitable, however, the right cuts can propel it into a leadership position.

Making the right cuts means taking a holistic view of the organization and the customer experience. It means looking at the retail outlet, the distribution center, and opportunities for automation from factory to point of sale.

Master Dynamic Pricing and Price Testing

Matching competitors’ prices seems intuitive, especially online, but isn’t always the wisest approach. Formulating the right pricing strategy is difficult. Setting the right price requires recording and understanding consumer signals and broad experimentation. These days dynamic pricing strategies often must be done per SKU and in some cases per customer.

According to McKinsey, retailers should consider dynamic pricing strategies and “conduct a pilot in a handful of categories for concept design and testing. Done right, the pilot—and the subsequent rollout of dynamic pricing across all product categories—will yield meaningful improvements in revenue, profit, and customer price perception.”

Key to any pricing strategy is testing whether it works and whether it influences customer buying decisions positively or negatively. Ethically, per-customer pricing should be done very carefully, avoiding any demographic attributes that might be linked to race, gender, or other sensitive characteristics. Pricing should be based strictly on customer behavior signals, similar to how Orbitz and other sites have implemented it.

Price testing should be implemented in combination with search A-B testing. Sometimes boosting more expensive (or lower-cost) brands will also yield stronger results. As in software and hardware development, the strongest results will come from the retailers who run the best tests.
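The testing discipline described above can be sketched as a simple two-proportion comparison between a control price and a variant price. The traffic and conversion numbers below are illustrative assumptions, not real retail data.

```python
from math import sqrt

def conversion_lift(control_buys, control_visits, variant_buys, variant_visits):
    """Compare the conversion rates of two price variants.

    Returns the absolute lift and a two-proportion z-score; a z-score
    near or above ~1.96 suggests the difference is unlikely to be noise.
    """
    p1 = control_buys / control_visits
    p2 = variant_buys / variant_visits
    pooled = (control_buys + variant_buys) / (control_visits + variant_visits)
    se = sqrt(pooled * (1 - pooled) * (1 / control_visits + 1 / variant_visits))
    return p2 - p1, (p2 - p1) / se

# Illustrative: the variant price converts at 2.4% vs. the control's 2.0%.
lift, z = conversion_lift(200, 10_000, 240, 10_000)
```

The same comparison applies to search A-B tests: swap conversion rate for click-through or add-to-cart rate.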

“Retailers who deploy a bunch of next generation social shopping are often times making up for poor competitiveness in other areas and have seen a 17% decrease in sales and a 36% decrease in share price.” – Accenture

Avoiding Digital Window Dressing

According to Accenture, digital window dressing is “any digital capability that is a ‘nice to have’ but does not make up for a lack of competitiveness in core areas such as price, assortment, customer service, etc.” Retailers who cut costs and move to digital marketplaces because competitors are doing it tend to underperform.

Brands are selling wares on sites like Facebook and Pinterest (or selling through them) in greater numbers. But retailers who deploy a bunch of these next-generation social shopping tactics are often making up for poor competitiveness in other areas and have seen a 17% decrease in sales and a 36% decrease in share price.

“Industry frontrunners are competitive because of differentiators enabled by digital investments that span their business, from competitive pricing and hassle-free delivery to broad selection and shopping made simple.” – Accenture

The right digital investments and partnerships can increase sales and profitability. For omnichannel retailers, those investments should focus on enhancing the in-store experience and connecting with consumer signal data that includes the online and mobile experience as well.

According to Accenture, “industry frontrunners are competitive because of differentiators enabled by digital investments that span their business, from competitive pricing and hassle-free delivery to broad selection and shopping made simple.” In other words, the investment shouldn’t end with customer experience. Back office logistics are critical and the retailers that use AI and other technologies to drive operational efficiency while eliminating silos will become more competitive.

“When it comes to profitability, an online sale is not an equivalent replacement for that same item purchased in store.”

Rethink Your Omnichannel and Distribution Strategies

According to the retail consulting firm AlixPartners, “when it comes to profitability, an online sale is not an equivalent replacement for that same item purchased in store.” Some retailers assume that online sales are automatically cheaper, or that buy-online, pick-up-in-store is the lowest-cost option for the retailer. By some models, this may not prove true when all of the costs are added up.

Research has shown that online customers spend less on average than in-store customers and that distance from the store is the main factor on whether someone shops online. Encouraging online shoppers to come into the store increases profits. Encouraging in-store shoppers to go online decreases profits.

The best omnichannel strategy focuses on making sure customers have a seamless, personalized experience and gives them a reason to come into the store if possible. This means, among other things, providing excellent search and recommendations. The best omnichannel strategy also takes into account the supply chain as well as other costs while measuring profitability all along the way.

AI Helps Industry Leaders to Specialize and Focus

Some of the more general department stores that never made the full omnichannel transition, like Kmart/Sears and JC Penney, are failing, while retailers that focus on one segment of the market or on one general area are profiting. Look at success stories like Home Depot, TJ Maxx/Marshalls, and Best Buy. Each specializes in a specific market segment, allowing them to cater to and delight their customers.

It is no accident that specialization allows for better AI recommendations and customer targeting. Specialization also gives brands greater control over supply chains, inventory, and merchandising. This focus tightens up everything from cost to how items are displayed, searched, and recommended.

Your best customers are the ones who come to your store or your site first and only go elsewhere if you can’t help them. It is in a retailer’s best interest to be everything those customers want it to be.

‘Signals’ Help You Focus on Your Best Customers

Especially with online retail, specializing and focusing means thinking about not all of your customers but your “best” customers. Customers who drop in once to take advantage of a low price aren’t your best customers. Your best customers are the ones who come to your store or your site first and only go elsewhere if you can’t help them. It is in a retailer’s best interest to be everything those customers want it to be.

Know these customers. There are tools and technologies that allow you to capture customer signals to better understand their behavior. Signals are customer behavior data that help you recognize and focus on these customers and recommend things to them in the store, and across mobile, web, and other channels. These signals can even tell you when these customers are not as satisfied as they have been in the past so you can develop a plan to incentivize them to come back.
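Concretely, a signal is just a timestamped behavior event tied to a user. A minimal sketch of capturing and summarizing such events (the field names and sample data are illustrative, not any specific product's schema):

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Signal:
    user_id: str
    signal_type: str   # e.g. "query", "click", "purchase"
    item: str
    channel: str       # e.g. "web", "mobile", "store"

def top_interests(signals, user_id, n=3):
    """Rank the items a given user interacts with most, across all channels."""
    counts = Counter(s.item for s in signals if s.user_id == user_id)
    return [item for item, _ in counts.most_common(n)]

# Illustrative event stream for two customers.
events = [
    Signal("u1", "click", "tent", "web"),
    Signal("u1", "click", "tent", "mobile"),
    Signal("u1", "purchase", "boots", "store"),
    Signal("u2", "click", "tent", "web"),
]
```

A drop in a best customer's event volume over time is exactly the kind of dissatisfaction indicator the paragraph above describes.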

Leaders Do Everything Necessary

One trick of the trade isn’t enough to lead the industry in profitability. Leading takes a combination of focus, rethinking and cutting costs, and deploying smart technology that allows you to cater to your best customers and delight them. By combining a set of strategies with the appropriate, well-thought-out technologies, leading retailers can profit even in uncertain times.

Next Steps

The post Increase Your Revenue to Profit Ratio appeared first on Lucidworks.

Lucidworks Fusion Recognized as a Leader in Gartner’s MQ on Insight Engines


Lucidworks was recently recognized as a leader in the Gartner Magic Quadrant in their Insight Engines category. I wanted to write down what this means for you — and why it’s so important to your organization.

For those unfamiliar, Gartner is the gold standard of analysts with over 2,000 research experts covering various computing and IT trends and vendor categories. As such, they wield enormous influence with IT buyers in the largest companies in the world.

Insight Engines is the term Gartner uses for search that goes beyond mere keywords. An Insight Engine is search that uses AI and advanced algorithmic techniques to deliver more relevant personalized results to customers and employees alike. In other words, Gartner awards this term to search infrastructure that provides real insights. A modern search engine.

Being Named a Leader

So. What does all this mean? Three big things are top of mind to me:

1. The industry agrees with our vision. We believe that search is the single best way to activate AI in the enterprise. We want to enable people to maximize every single digital moment at work or at play. Whether they are looking for an esoteric topic to help them complete a project or browsing a camping website for the perfect cold-weather tent, we want to be the technology that helps people make connections to topics, insights, and experts at the exact moment when they can best use them.

2. Our customers believe in our vision. We launched Fusion four years ago by taking the most robust and reliable search technology in the world, Apache Solr, and fusing it with the popular cluster-computing framework Apache Spark. On top of that foundation we added essential enterprise-friendly AI and operational features to the stack so that some of the most influential organizations in the world could rely on our platform to solve their toughest problems.

Since that time, more than 400 of the largest organizations in the world have given us the honor of running their most important workloads on our Fusion platform. These include top 5 global banks, retailers, and energy companies.

3. We’re primed and ready for much more. As you know, search is not just a box to find things; search technology is everywhere you look and on every screen you see. Search has gone way beyond 10 blue links. AI requires vision and a human focus to be more than hype.

Over the next 18-24 months, we will accelerate delivery capabilities for our customers and humanize AI. This means more user-friendly apps for casual users, power analysts, and service and support agents.

It also means more deployment-friendly tools for system administrators and DevOps professionals, all containerized and ready to use. All of these tools and apps must be worry-free on any combination of private and public cloud infrastructures.

Search might be 30 years old, but don’t confuse 30-year old technology with the search of today.

We’re thrilled about what’s to come. Your trust in our products and team motivates us to keep innovating with you as, together, we bring the power of our platform to more use cases where we can effectively humanize AI. Together we can make the most complex technology easy and accessible for anyone, wherever they can best make use of it — through each digital moment in their lives.

Next Steps

The post Lucidworks Fusion Recognized as a Leader in Gartner’s MQ on Insight Engines appeared first on Lucidworks.

How to Highlight Search Terms Using Query Workbench


In this “From The Field” series, we’ll explore the Query Workbench within Fusion Server and walk through helpful tips and tricks on making the most of your search results. This post discusses how to quickly (in less than five minutes) highlight search terms within search results and explore other available highlighting features. Let’s start the timer:

5 minute timer

What Is Highlighting?

When users are presented with search results, they often see snippets of information related to their search. Highlighting reveals the keywords inside those snippets of results so the user can visually see the occurrences. This functionality enhances the user experience and usability of search results.

Basic Highlighting

To get started, we’re going to use a previously built Fusion App that performed a website crawl of lucidworks.com. After logging in to Fusion, selecting our app, and opening the Query Workbench from the Querying menu, we’ll be presented with the crawled documents.

open query workbench in Lucidworks Fusion

The highlighting features are driven by Solr query parameters, applied through the Additional Query Parameters stage. Open the Add a Stage dropdown menu and select Additional Query Parameters to add the stage to the Query Pipeline. (See the Query Pipelines documentation for details.)

additional query parameters in Lucidworks Fusion

On the Additional Query Parameters stage, name the stage by adding a label, such as “Highlighting.” We’ll begin by adding the two required Solr parameters (hl and hl.fl):

Additional Query Parameters stage Lucidworks Fusion

We give the hl parameter a value of true to enable the highlighting, and the hl.fl (field list) parameter a wildcard value of * to match all fields where highlighting is possible. In production, you will want to explicitly define the fields to match. Click Save to apply the changes. Hint: You can click the Cancel button to close out the stage panel.
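Behind the stage, these are ordinary Solr query parameters appended to the request. A sketch of the equivalent raw query URL (the host and collection name are placeholders, not part of the walkthrough above):

```python
from urllib.parse import urlencode

# The two required highlighting parameters, as configured in the stage above.
params = {
    "q": "data",
    "hl": "true",  # enable highlighting
    "hl.fl": "*",  # highlight all possible fields; list fields explicitly in production
}
query_string = urlencode(params)
url = f"http://localhost:8983/solr/mycollection/select?{query_string}"
```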

By default, the Query Workbench does not display highlighted results. To enable display of highlighted results, open the Format Results options at the bottom and check the Display highlighting? option. Click Save to apply the change.

Display Highlighting? option Lucidworks Fusion

Now let’s test a query to see the highlighting in action. In our query field, we’ll perform a search for data:

query field search data Lucidworks Fusion

We can now see matches from the query being highlighted, as well as the fields that contain the matches. The actual highlighted fragments, as seen under each result in the Query Workbench, come from the highlighting section of the response. To view the response, click on the URI tab and copy/paste the Working URI into a new browser tab:

Query Pipeline API response Lucidworks Fusion

This Query Pipeline API response provides a highlighting section for each document with the matching snippets per field:

{
  "debug": {
    ...
  },
  "response": {
    ...
  },
  "responseHeader": {
    ...
  },
  "highlighting": {
    "http://lucidworks.com/darkdata/": {
      "twitter_title_t": [
        "Lucidworks | Dark <em>Data</em>"
      ],
      "twitter_description_t": [
        "What you know about your <em>data</em> is only the tip of the iceberg. #darkdata @Lucidworks"
      ],
      "og_title_t": [
        "Lucidworks: The <em>Data</em> that Lies Beneath"
      ],
      "title_t": [
        "Lucidworks: The <em>Data</em> that Lies Beneath"
      ],
      "og_description_t": [
        "Dark <em>Data</em> is Power."
      ],
      "body_t": [
        "00.100 THE <em>DATA</em> THAT LIES BENEATH What you know about your <em>data</em> is only the tip of the iceberg"
      ]
    },
    "https://lucidworks.com/2018/06/25/big-data-failing-pharma/": {
      "twitter_title_t": [
        "Big <em>Data</em> is Failing Pharma"
      ],
      "og_title_t": [
        "Big <em>Data</em> is Failing Pharma"
      ],
      "title_t": [
        "Big <em>Data</em> is Failing Pharma"
      ],
      "og_url_t": [
        "https://lucidworks.com/2018/06/25/big-<em>data</em>-failing-pharma/"
      ],
      "body_t": [
        " machine learning, and artificial intelligence. Learn more › Quickly create bespoke <em>data</em> applications for"
      ],
      "article_section_t": [
        "Big <em>Data</em>"
      ]
    },
    ...
  },
  "facet_counts": {
    ...
  }
}

Using a tool such as Fusion App Studio, highlighting will be parsed and displayed automatically on the front-end UI. For custom UI integrations, the Query Pipeline API’s response with highlighting information can be easily parsed for presentation.
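For a custom UI, parsing that section amounts to collecting each document's fragments. A minimal sketch against the response shape shown above:

```python
def snippets_by_doc(response):
    """Flatten the 'highlighting' section of a query response into
    one list of display fragments per document id."""
    out = {}
    for doc_id, fields in response.get("highlighting", {}).items():
        fragments = []
        for fragment_list in fields.values():
            fragments.extend(fragment_list)
        out[doc_id] = fragments
    return out

# Trimmed-down version of the response shown above.
sample = {
    "highlighting": {
        "http://lucidworks.com/darkdata/": {
            "twitter_title_t": ["Lucidworks | Dark <em>Data</em>"],
            "og_description_t": ["Dark <em>Data</em> is Power."],
        }
    }
}
```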

Additional Highlighting Parameters

Up to this point, we’ve only enabled highlighting and used default parameters to demonstrate core functionality. When deploying in production, however, we may want to be more selective about the fields that require highlighting, the tags to use before and after a highlighted term, and the specific highlighter best suited to our needs.

When choosing a highlighter, be conscious of the index cost of storing additional highlighting features. For example, besides the stored value, terms, and positions (where the highlighted terms begin and end), the FastVector Highlighter also requires full term vector options on the field. The choice of highlighter can therefore affect both index size and query execution time. See the Solr Highlighters section below for more information.

Snippets

By default, only one snippet is returned per field. The parameter hl.snippets controls the number of snippets that will be generated. For example, the default value of 1 returns the following:

snippet

When this value is increased to 3, additional snippets within the body_t will be highlighted:

snippet

Pre/Post Tags

Most commonly, an HTML tag is placed before and after the highlighted term for the presentation layer. By default, the pre tag is <em> and the post tag is </em>. Depending on the chosen highlighter, the parameter prefix is either hl.tag. (Unified Highlighter) or hl.simple. (Original Highlighter), giving hl.tag.pre/hl.tag.post or hl.simple.pre/hl.simple.post. Any string can be used for the respective pre or post parameters.

For example, if we wanted to change to a <strong> HTML tag, we configure the following parameters:

parameters

Note that the parameter value for an HTML tag must be escaped.

This would generate the following result:

snippet

The highlighting section of the Query Pipeline API response would also reflect this change:

...
"highlighting": {
  "http://lucidworks.com/darkdata/": {
    "twitter_title_t": [
      "Lucidworks | Dark <strong>Data</strong>"
    ],
    ...
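Putting the last two sections together: the snippet count and the pre/post tags are ordinary query parameters, and the tag values must be URL-escaped when sent in a raw query string. A sketch (the field name and values are illustrative):

```python
from urllib.parse import urlencode

highlight_params = {
    "hl": "true",
    "hl.fl": "body_t",
    "hl.snippets": "3",           # up to three fragments per field
    "hl.simple.pre": "<strong>",  # Original Highlighter pre/post tags
    "hl.simple.post": "</strong>",
}

# urlencode escapes the HTML tags, e.g. <strong> becomes %3Cstrong%3E
query_string = urlencode(highlight_params)
```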

Solr Highlighters

Solr features several highlighters: the Original (default) Highlighter, the Unified Highlighter (new as of Solr 6.4), and the FastVector Highlighter. Each one trades off accuracy against speed. Depending on your workload and needs, you may want to evaluate each to see how it performs on searches for terms, phrases, and wildcards.

For a complete guide on choosing an appropriate highlighter, see the Fusion documentation.

Summary

Lucidworks Fusion provides a comprehensive workbench to configure and test highlighting of search terms within search results.

For further uses and configuration parameters, see the Fusion documentation.

The post How to Highlight Search Terms Using Query Workbench appeared first on Lucidworks.


What Is Natural Language Search?


Natural language search is the insane idea that maybe we can talk to computers in the same way we talk to people. Absolutely nuts, I know.

With the increasing popularity of virtual assistants like Siri and Alexa, and devices like Google Home and Apple’s Homepod, natural language search is ready for prime time in the devices in our homes, our offices, and in our pockets.

Alexa, Siri, and Google Home are Search Apps

All these devices and virtual assistants making their way into our homes and hearts have search technology at their core. Any time you query a system, database, or application and it has to decide which results to display – or say – it’s a search application. OpenTable, Tinder, and Google Maps are all search-based applications. Search technology is at the core of nearly every popular software application you use today at work, at home, at play, at your desk, or on your smartphone.

But how you interact with these systems is changing.

The Annoyance of Search

In the old days, if you were searching a database or set of files or documents for a particular word or phrase, you’d have to learn a whole arcane set of commands and operators.

Example of Boolean operators that are used by databases and applications all over the world, diagram courtesy of Slippery Rock University.

Boolean in a Nutshell

You’d have to know Boolean operators and other logic so you could search this table WHERE this word AND that word OR that other word appear but NOT this other word and then SORT by this field. (Got it?)

Each system had its own idiosyncrasies that only experienced users would know, and you might have to run queries and reports multiple times to make sure you got the right results back – or all the results possible. You’d have to know the structure of the database or data set you were querying and which fields to look at.
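As a toy illustration of that Boolean style (not any particular database's syntax), a matcher over plain-text documents might look like:

```python
def matches(doc, all_of=(), any_of=(), none_of=()):
    """Toy Boolean matcher: the document must contain every AND term,
    at least one OR term (if any are given), and no NOT terms."""
    words = set(doc.lower().split())
    return (all(t in words for t in all_of)
            and (not any_of or any(t in words for t in any_of))
            and not any(t in words for t in none_of))

docs = [
    "solr search engine tuning",
    "spark cluster computing",
    "solr spark fusion platform",
]
# WHERE solr AND NOT tuning
hits = [d for d in docs if matches(d, all_of=("solr",), none_of=("tuning",))]
```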

These requirements put up barriers for people wanting to find information to do their jobs or trying to do research at a library. You’d have to ask a specialist who knew the ins and outs of each system, wait for them to run the report or query for you and print out the results, and hope it answered the question you originally had.

Evolution of Natural Language Search

You couldn’t just say out loud what you wanted to know and then have it delivered to you instantly.

But what if you could? That’s natural language search.

“What was last year’s recognized revenue from the APAC region?”

“What restaurant was it Mary mentioned last week in a text message?”

“Who hosted the highest rated Oscars telecast ever and what year was it?”

And the system takes that request, whether spoken or typed into a box, takes it apart, figures out what you’re looking for, what you aren’t, where you’re searching, and what to include, and turns it into a query it can submit to a database or search system in order to return the results right back to you.

The Technology Behind Natural Language Search

On the backend, there are several bits of technology at play.

Let’s say you ask your favorite nearby listening smart device:

What band is Joe Perry in?

First the device wakes up and records an audio file. The audio file gets sent across the internet and is received by the search system.

The audio file is processed by a speech-to-text API that filters out background noise, analyzes it to find the various phonemes, matches it up to words and converts the spoken word into a plain English sentence.

This query gets examined by the search system, which notices a two-word proper name in the sentence: Joe Perry. Picking out these people, places, and things from a data set, collection of files, or body of text is called named entity recognition, and it’s a pretty standard feature of most search applications.

So the system knows that the words Joe Perry refers to a person. But there might be several notable Joe Perrys in the database so the system has to resolve these ambiguities.

Probably not the Joe Perry you’re looking for.

There’s Joe Perry the NFL football player, Joe Perry the snooker champion (totally not kidding), Joe Perry the Maine politician, and Joe Perry the popular musician. The word band in the query alerts the system that we’re probably looking for careers associated with a band, like composer, musician, or singer. That’s how it disambiguates which Joe Perry we’re looking for.

A database of semantic information about musicians might have information about Perry’s songs, career, and yes the bands he’s been a part of during his career.

The system takes apart the sentence, sees the user is asking for a band associated with Joe Perry. It looks at a database of musicians, performers, songs, albums, and bands. It sees that Joe Perry is semantically associated with several bands but the main one is Aerosmith.
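The recognition-and-disambiguation steps described above can be sketched with a toy knowledge base (the entries, context words, and facts are illustrative stand-ins for a real entity database):

```python
# Toy knowledge base: several notable people share the name "Joe Perry".
KB = {
    "Joe Perry": [
        {"occupation": "football player", "context": {"nfl", "football"}, "fact": "NFL fullback"},
        {"occupation": "snooker player", "context": {"snooker"}, "fact": "snooker champion"},
        {"occupation": "musician", "context": {"band", "guitar", "song"}, "fact": "Aerosmith"},
    ]
}

def answer(query):
    """Find a known entity in the query, then pick the sense whose
    context words overlap the rest of the query the most."""
    words = set(query.lower().replace("?", "").split())
    for name, senses in KB.items():
        if set(name.lower().split()) <= words:  # named entity recognition (toy)
            best = max(senses, key=lambda s: len(s["context"] & words))  # disambiguation
            return best["fact"]
    return None
```

Here the word "band" tips the score toward the musician sense, mirroring the disambiguation the paragraph describes.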

The query has been spoken, converted to text, turned into a query, and sent to the system, and it comes back with the answer that Joe Perry is in a band called Aerosmith, along with related metadata: he’s been in Aerosmith since it formed in 1970. The system puts this answer together into a sentence.

It ships that sentence off to a text-to-speech API, which pieces together the words into audio that sounds like a human being, then sends that audio file back to the device, which answers:

Joe Perry has been in the band Aerosmith since 1970.

The question is asked just like you’d ask a human being — and answered in exactly the same way (and correctly).

Natural language search lowers the barriers to information and access, enhancing our lives at work, at play, or when we’re trying to settle a bar bet over a piece of pop culture. When users can talk to devices just like they talk to their friends, more people can get more value out of the applications and services we build.

Top image from 1986’s Star Trek IV: The Voyage Home where Scotty tries to talk to a 1980s computer through its mouse (video).

The post What Is Natural Language Search? appeared first on Lucidworks.

How to Make SharePoint WAY Better


SharePoint is just a document management system so why are there long threads on Reddit devoted to its inadequacy? Why do your users hate it so much? Can you make it better? Here are a few tips to improve the SharePoint experience.

They Can’t Find What They Need

The number one problem users have is that they can’t find what they need. You can tune SharePoint’s search to do an adequate job for a lot of content. However, SharePoint’s search (originally called Fast) was initially developed over two decades ago and hasn’t seen major architectural improvements since the days when Fast laid off half of its staff, got involved in a stock scandal, and had board members call each other liars and wonder if they should have shot each other.

Besides all of that drama, many companies create separate SharePoint domains for every minor department or subdivision of the company. Moreover, some of what you need for context in order to find things in SharePoint may not be in SharePoint at all. In other words, to do search well you need a system designed for modern search. You should also consider using security groups rather than separate SharePoint domains, and think carefully about how to organize your documents into libraries.

It Is Too Slow

Obviously, you need adequate numbers of disks and CPU for your SharePoint servers.

  • Use groups instead of individual user permissions; individual permissions slow down the indexing process and are a pain to manage.
  • Remove BLOBs from SQL Server and use local storage.
  • Make sure you have proper disk resources assigned to both SharePoint and the backing SQL Server.

There are a number of other performance tuning opportunities from the front-end to how memory and disk are utilized. Make use of all of them.

They Think It’s Ugly

Out of the box, SharePoint isn’t the prettiest thing. However, SharePoint is highly customizable, especially the web layer. Brand it with your corporate look and feel — or a more attractive variant of it. Using HTML and CSS, you can change the look and feel with minimum hardcore technical effort.

They Can’t Share Things

You can improve your users’ ease of sharing if you:

But most of all make sure your users are educated on SharePoint’s full capabilities. Many organizations have closet Dropbox usage when in reality SharePoint can share files externally and securely.

In the end, SharePoint is a powerful tool for organizing internal content, controlling access, and sharing content externally — if used and configured correctly. SharePoint’s search capabilities aren’t great, but can be tuned. You will be better off using dedicated search software to provide more relevant, personalized, and context-sensitive results.

Next Steps

The post How to Make SharePoint WAY Better appeared first on Lucidworks.

Retail Site Search: Shoppers Still Can’t Find What They’re Looking For


In one of U2’s best-known songs, Bono sings dolefully, “I still haven’t found what I’m looking for.” He’s not alone. On many retail websites, shoppers longing to find an item with ease and precision also end up feeling unfulfilled.

In one survey of U.S. shoppers, a solid majority (60%) reported being frustrated with irrelevant search results. In another, almost half (47%) of online shoppers complained that “it takes too long” to find what they want, while 41% have difficulty finding “the exact product” they are looking for.

Consumers’ frustration with poor shopping site search is no small matter. The fact that shoppers can search independently for products online, as opposed to waiting for a retailer to present them, is at the heart of the ongoing transformation of retailing.

60% of shoppers reported being frustrated with irrelevant search results

Recommendations Create the Shopper Journey

Analysts at Deloitte Consulting identified the “trend for consumers to take their own lead in the shopping journey” in a report called The New Digital Divide. “A significant number of consumers want to manage the journey themselves, directing the ways and times in which they engage retailers rather than following a path prescribed by retailers or marketers.”

Precise, personalized, and speedy search—and the companion function of product recommendations—is becoming a key differentiator in ecommerce. Even Amazon, where 47% of shoppers already start their product searches, is working on the next-generation search experience.

Amazon’s new service called Scout is based on products’ visual attributes, reports CNBC. “It is perfect for shoppers who face two common dilemmas: ‘I don’t know what I want, but I’ll know it when I see it,’ and ‘I know what I want, but I don’t know what it’s called,’” said a statement Amazon sent to CNBC.

As leading-edge ecommerce sites like Amazon train consumers to expect better search, shoppers will have even less tolerance for a mediocre search experience.

‘What’s New?’ Is Still the Leading Question

Once customers have made a purchase, retailers have the knowledge to use to entice them back. A simple and effective way to do that is with automation; Amazon’s Subscribe & Save program, for example, provides a discount to shoppers who sign up for regularly-scheduled automatic delivery of certain items.

However, because consumers overwhelmingly want to see what’s new when they interact with a retailer, site search and personalized recommendations provide ecommerce with the greatest opportunity to capture new shoppers or to introduce existing customers to additional products or categories.

In fact, 69% of consumers responding to a Salesforce and Publicis.Sapient survey reported that it is “important” or “very important” to see new merchandise every time they visit a physical store or shopping site, and three-quarters of shoppers are using new site search queries online each month.

This explains why more than half (59%) of the top 5% of best-selling products on e-commerce sites change every month, according to the report. “That means retailers and brands can’t sleep on analyzing shopper searches and delivering the ever-changing items they seek in real time.”

Machine Learning: Know Your Customer

E-retailers are increasingly using Artificial Intelligence (AI), specifically Machine Learning (ML) and Natural Language Processing (NLP), to help shoppers discover what they want, perhaps before they know themselves.

Engagement pays off: 6% of e-commerce visits that include engagement with AI-powered recommendations drive 37% of revenue

AI-driven personalized recommendations can also provide a big payoff for retailers. A survey by Salesforce and Publicis.Sapient found that “6% of e-commerce visits that include engagement with AI-powered recommendations [drive] an outsized 37% of revenue.”

“The best way to understand your customers’ needs is to actually track and listen to your consumer,” said Lasya Marla, Director of Product Management at Lucidworks. “You do this by tracking customer signals, what they click on, what they ignore, what they call things. Recording and analyzing signals is crucial to learning your customers’ likes and dislikes and their intent.”

Merchandising Expertise Still Key

While machine learning can automatically suggest products and help customers discover items they wouldn’t have found otherwise, many brands, particularly lifestyle brands, are loath to risk merchandising with machines while they have experts on hand.

According to Peter Curran, president of Cirrus10, an onsite search system integrator in Seattle, “we work with brands that want ML to eliminate the drudgery of search curation—synonyms, boosts, redirects, and keywording—but who still want to finesse the customer experience.

“The dance between brand and brand aficionado is filled with nuance that merchandisers tend to notice—and IT departments tend to miss. The role of the merchandiser is ready for transformation,” Curran continued. “Feature selection, entity extraction, embeddings, and similar concepts are currently the job of the data scientist, but that work can’t be done well without the cooperation of the business user. We need tools that allow business users and data scientists to cooperate on improved models and always allow business users to override automation.”

Next Steps for Brands and Retailers

For ecommerce retailers and brands thinking about upgrading and modernizing their search functionality, it’s critical to develop a strategy that is integrated with the organization’s long-term goals. There are many aspects to consider during the selection of new technology for site search, but these questions can help in the process:

  • How can we develop better algorithms and techniques to match keywords to products?
  • What do we need to automatically fix search keywords based on misspellings, word order, synonyms, and other types of common mismatches?
  • How can we enable our marketing and merchandising people to take control of search so that it supports the business, including promotions, inventory, and seasons?
  • What tools do we need to analyze search trends at both an individual and macro level so that we can adjust in real time?
  • What are the signals customers are sending—and how can we best capture them?

While it is true that looking for a new pair of boots or a specialized metalworking tool does not rank up there with the search for a soulmate that U2’s Bono sings about, the desire to find a certain item in an online store has an emotional component that is intimately connected to the shopper’s perception of that brand.

Ironically, technology can produce search results and recommendations that are so personalized that they enhance this emotional connection, giving the consumer the sense that the brand “knows me.” The choice of a platform for site search, then, will make visitors fall more deeply in love with a retail brand—or send them elsewhere to find what they are looking for.

Marie Griffin is a New Jersey-based writer who covers retail for numerous B2B magazines.

The post Retail Site Search: Shoppers Still Can’t Find What They’re Looking For appeared first on Lucidworks.

Six Reasons to Switch From Endeca Today


When you deployed Endeca, it was the best ecommerce search on the market. With state-of-the-art relevance, faceted search, and customer experience tools, it drove search for most large ecommerce sites. But that was then. After years without a major refresh, Endeca fails to meet today's customer expectations.

Here are six reasons to ditch Endeca and switch to Lucidworks Fusion today:

1. AI-powered UX Converts More Browsers…Into Buyers

In the age of Amazon and Google, customers don't expect to learn your site; they expect it to learn them. Leverage customer signals and AI-powered search to determine intent and recommend products that meet your customers' wants.

AI-powered recommendations result in higher average order size — leading to greater transactions and revenue.

“We’ve seen dramatic bumps in conversion rates and, overall, some of those key success metrics for transactional revenue within an order of magnitude of a 50% increase since the migration.”

– Marc Desormeau, Senior Manager, Digital Customer Experience, Lenovo

2. Head-n-Tail Analysis

Customers don’t always describe things in the same terms that your site does. An artificial intelligence technique called “Head-n-Tail Analysis” automatically fixes search keywords based on misspellings, word order, synonyms, and other types of common mismatches.
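The core idea can be illustrated with a toy rewrite: map rare "tail" queries onto well-understood "head" queries by normalizing synonyms and word order, then fuzzy-matching to absorb misspellings. The head-query list, synonym map, and function names below are hypothetical; a real Head-n-Tail pipeline learns these mappings from signal data rather than hard-coding them.

```python
import difflib

# Hypothetical high-traffic "head" queries with known-good results.
HEAD_QUERIES = ["mens running shoes", "womens boots", "winter jacket"]

# Hypothetical synonym map collapsing variant wording.
SYNONYMS = {"sneakers": "shoes", "men's": "mens", "coat": "jacket"}

def normalize(query: str) -> str:
    """Lowercase, apply synonyms, and sort tokens so word order is ignored."""
    tokens = [SYNONYMS.get(t, t) for t in query.lower().split()]
    return " ".join(sorted(tokens))

def rewrite_tail_query(query: str) -> str:
    """Map a long-tail query onto its closest head query; fuzzy matching
    on the normalized forms absorbs misspellings."""
    candidates = {normalize(h): h for h in HEAD_QUERIES}
    match = difflib.get_close_matches(normalize(query), candidates.keys(),
                                      n=1, cutoff=0.6)
    return candidates[match[0]] if match else query
```

For example, a misspelled, reordered query like "men's runing shoes" rewrites to "mens running shoes", while an unrelated query falls through unchanged.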

3. Data & Expertise Protection

Your customer insights are what drive algorithms. Don't be fooled into using black-box products that take your expertise to refine their algorithms, taking away your data, your control, and your merchandising knowledge!

Your algorithms should be just that. Yours.

4. Stats-Based Predictions

Rules are great, but they shouldn't be a workhorse; they can turn into a maintenance nightmare. Instead, use AI-powered search to rely on statistics and reduce rules.

Save rules to boost seasonal items, new trends — or whenever your merchandise expertise says you should.

5. Powerful Analytics

App Insights allows your analysts to look at customers at both a personal and statistical level. You will also be able to see what works and what doesn’t using A/B testing and experiments.

6. Faster Indexing

You can’t sell what you can’t serve, so faster indexing lets you be more agile in your merchandising.

Further, make ancillary business data from ERP and supply chain systems readily available to drive search results and recommendations.

Learn More

The post Six Reasons to Switch From Endeca Today appeared first on Lucidworks.

Physicians Improve Patient Treatment With AllMedx and Lucidworks Fusion


For 10 years, Doug Grose, Chief Executive Officer of AllMedx, envisioned a “Google for doctors.” With extensive experience in medical communications, Grose knew doctors were frustrated with conventional search engines like Google and Bing because results were often diluted with unreliable content that was intended for consumers and patients. He felt that physicians and other healthcare professionals could benefit from a search tool that sourced its content solely from MD-vetted articles, high-impact medical journals, and other select, reputable clinical sources. The goal was to eliminate irrelevant, consumer-type pieces that would be of little-to-no value to physicians looking for answers to clinical, point-of-care questions.

Once the AllMedx corpus was built, Fusion was used to index data sources including PubMed, CDC, FDA, the leading physician news sites, clinicaltrials.gov, rarediseases.org, NIH DailyMed, Merck Manuals Professional, and many more, including clinical guidelines from 230 medical societies and thousands of branded drug sites.

Learning From the User

AllMedx.com is customized to each user based on their interests, previous queries and behavior, and medical specialty. For example, a cardiologist searching for “valve defects” will first see articles in their query results that other cardiologists searching for “valve defects” previously found helpful. Cutting-edge algorithms in Fusion, using AI and ML, automate the content indexing and tailor search results so AllMedx.com is able to provide doctors with the answers they seek faster than other sites.
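The specialty-aware ranking described here can be sketched as a simple rerank over peer click signals. The click table and function below are hypothetical stand-ins for Fusion's learned models: results a physician's same-specialty peers clicked for the same query are promoted to the top.

```python
# Hypothetical click counts: (specialty, query, doc_id) -> clicks from that specialty.
SPECIALTY_CLICKS = {
    ("cardiology", "valve defects", "doc-echo-review"): 42,
    ("cardiology", "valve defects", "doc-peds-murmur"): 3,
}

def rerank(results: list[str], query: str, specialty: str) -> list[str]:
    """Order results by how often same-specialty peers clicked them for this query."""
    return sorted(results,
                  key=lambda d: SPECIALTY_CLICKS.get((specialty, query, d), 0),
                  reverse=True)
```

A cardiologist's results for "valve defects" would thus lead with the documents other cardiologists found helpful, while unclicked documents keep their original relative order.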

Unique Taxonomy

Fusion has been the muscle behind AllMedx’s search since the launch of the website in April 2018. The user-friendly Fusion platform allows the AllMedx internal development team to be hands-on. Chief Operating Officer and Editor-in-Chief Carol Nathan performs one-on-one user testing with physicians in all medical specialties on a regular basis and can easily tune query pipelines accordingly with this user feedback. With Fusion, the staff has time for real-time improvements, can index new content quickly, and can do a lot of the configuration from the admin panel without having to engage with engineers working on code-level development.

In a particularly unique and market-first application, AllMedx boiled down the medical field into a taxonomy of 12,000 disease states and applied this taxonomy to more than seven million documents across dozens of data sources on a platform called AllMedicineTM. AllMedicine is updated daily with links from 2,000 sources and has 10 to 20 times more content than other physician resources, all neatly organized in the way doctors think about patient care. With the large taxonomy and number of records, the processing power required was a big question, and the team was concerned that the process to properly index the data could take months. However, using Fusion, the team efficiently and elegantly indexes sources and applies their taxonomy on a regular basis, with a full index taking just a few hours each day.
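Applying a disease-state taxonomy across millions of documents boils down to matching each document against the terms that indicate each disease state. The two-entry taxonomy and function below are a hypothetical toy version of that classification step, not the AllMedicine implementation:

```python
# Hypothetical toy taxonomy: disease state -> indicative terms.
TAXONOMY = {
    "type 2 diabetes": ["type 2 diabetes", "t2dm", "metformin"],
    "atrial fibrillation": ["atrial fibrillation", "afib", "anticoagulation"],
}

def tag_document(text: str) -> list[str]:
    """Return the disease states whose indicative terms appear in the document."""
    lowered = text.lower()
    return sorted(state for state, terms in TAXONOMY.items()
                  if any(term in lowered for term in terms))
```

At AllMedx's scale (12,000 disease states over seven million documents) this matching runs inside the index pipeline, which is why keeping a full daily reindex to a few hours matters.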

For AllMedx, Fusion has been intuitive, efficient, and dependable. “We’ve never been down. It’s been very stable, 100% stable,” says Grose. “Our physician users are getting precisely the experience that we envisioned when building AllMedx.”

The site’s popularity and reputation are steadily increasing; AllMedx already has 125,000 physician users and hopes to reach its goal of 250,000 physician users by the end of the year. The AllMedx team plans to index up to 2,000 additional medical sites, which will allow them to serve each medical specialty with a robust variety of quick and easy-to-navigate resources. Based on physician user feedback, the AllMedx team is confident the site will increase access to critical, clinical point-of-care information that will help improve patient care.

Learn More

The post Physicians Improve Patient Treatment With AllMedx and Lucidworks Fusion appeared first on Lucidworks.
