A chatbot is exactly what the name suggests: a piece of software designed to communicate in some basic sense with a human. In what feels like a brief period of time, chatbots have gone from hilariously inept novelties spouting utter nonsense (think back to the days of Eliza) to sophisticated, feature-rich programs with real-world value.
Because of this rapid improvement, it’s only sensible to wonder just where chatbots can take us. Is handling basic operations the best we can reasonably expect from them, or is there a chance that complex people-focused roles could be usefully automated? Might we see online communities held together by nothing more than algorithms?
Let’s consider whether chatbots are set to develop what it takes to deal with complex human interactions in a managerial role, or whether they’re likely to remain lacking.
Advancements in Natural Language Processing (NLP)
The rise of the chatbot can be primarily attributed to developments in natural language processing, which for a long time was the main bottleneck holding back chatbot potential. Computers have traditionally struggled to fathom the nuances of human expression, and all the features in the world are useless if you can't figure out what a user actually wants.
In recent years, NLP has shot ahead, as evidenced by developments such as Apple’s Siri voice assistant or Amazon’s Alexa system. Cloud processing power and semantic networks make it possible to infer user intent in most instances, and it doesn’t hurt that younger, tech-savvy demographics are familiar with the persisting limitations of NLP and know to adjust their language to suit.
With chatbots better equipped to parse human language, and humans better prepared to phrase their requests carefully, some remarkable things can be achieved through voice or text interfaces. You can set reminders, create emails, place orders, find answers to questions, and accomplish many other things besides.
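At its simplest, this kind of intent inference boils down to mapping an utterance to an action. Real assistants use trained statistical models rather than anything this crude, but a toy version using keyword patterns (the intent labels and patterns below are illustrative, not from any real system) conveys the idea:

```python
import re

# Illustrative intent patterns; production NLP systems use trained
# models, but the utterance-to-intent mapping is the same basic idea.
INTENT_PATTERNS = {
    "set_reminder": re.compile(r"\bremind me\b|\breminder\b", re.I),
    "send_email":   re.compile(r"\bemail\b|\bsend (a )?message\b", re.I),
    "place_order":  re.compile(r"\border\b|\bbuy\b", re.I),
}

def infer_intent(utterance: str) -> str:
    """Return the first matching intent label, or 'unknown'."""
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(utterance):
            return intent
    return "unknown"

print(infer_intent("Remind me to call the supplier at 3pm"))  # set_reminder
print(infer_intent("I'd like to order two of those mugs"))    # place_order
```

The "unknown" fallback matters: it is exactly the case where, as discussed later, a human needs to step in.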
What does an online community manager do?
Typically hired by brands, online community managers act as liaisons between those brands and the communities they seek to maintain across forums and social media channels. You could think of them as lower-level officers in small armies, charged with maintaining morale, gathering feedback, ensuring cohesion, and reporting back to command.
From a technical standpoint, a community manager’s work involves using any and all relevant digital channels to talk to existing members of the community and attract new members. They must keep track of brand developments, provide accurate information when required, and use whatever tools are at their disposal to assess overall sentiment towards the brand.
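The sentiment-assessment part of that job is usually handled with dedicated analytics tools, but the core mechanism can be sketched as lexicon-based scoring. Everything below (the word lists, the scoring rule) is a deliberately reduced illustration, not a production method:

```python
# Illustrative word lists; real sentiment tools use far larger
# lexicons or trained classifiers.
POSITIVE = {"love", "great", "excellent", "helpful", "fast"}
NEGATIVE = {"hate", "broken", "slow", "refund", "terrible"}

def sentiment_score(comment: str) -> int:
    """+1 per positive word, -1 per negative word; the sign gives the leaning."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

comments = [
    "love the new release, support was fast and helpful",
    "the checkout is broken and shipping is slow",
]
for c in comments:
    print(c, "->", sentiment_score(c))
```

Aggregating scores like this across every channel is what lets a community manager (or the brand behind them) quantify how each comment, positive or negative, feeds into overall sentiment.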
The community manager role is fairly new because there used to be many fewer online channels, with most of them not being considered particularly important. Now that companies have access to detailed analytics that can tell them exactly where their traffic and revenue are coming from, they’re better equipped to understand that every online comment (every piece of praise or criticism) can affect their overall success.
The rise of conversational commerce
A big reason why the titular question is worth asking is that conversational commerce has become a huge driver of revenue in the e-commerce world. In essence, conversational commerce is about migrating various aspects of the sales funnel to messaging apps and channels: fielding queries, providing information, and offering products.
And it isn’t just about the front end of the e-commerce spectrum. Shopify set a precedent a couple of years back with its acquisition of Kit CRM, applying Kit’s virtual-assistant technology to its store-building software and making it possible for store owners to forgo the typical admin panel, instead making changes and running marketing campaigns directly from messaging apps.
Since chatbot technology is serving as a bridge between stores and customers, and between stores and store owners, it’s possible to have an e-commerce business that produces sales and revenue using NLP technology at both ends of the operation.
The safety of human control
Despite the noted power of chatbot technology, and the promise it shows for the future, I have some reservations because of how it can be perceived when used too extensively. There’s a significant difference between using an obvious utility in a set channel to carry out some basic requests and being corralled by a brand bot that can chase you across the internet.
For the foreseeable future, there will need to be reachable human support, as people will need to know that they can talk to actual humans if they deem it necessary (whether because bots couldn’t understand their issues, or they simply want to be able to vent to living things).
Also, there are dangers associated with every level of NLP sophistication:
- A chatbot that can’t imitate human behavior very accurately will be open to manipulation and mockery from users, possibly turning the responsible brand into a joke.
- A chatbot that can imitate human behavior accurately but has a subdued manner will never be particularly effective at forging valuable connections.
- A chatbot that can approximate strong emotion will be a huge PR risk: tone is tricky, and if a customer is upset by something a chatbot says, the brand will get the blame.
Think about the concerns people have about self-driving cars. They may recognize that they are statistically safer, but they instinctively feel more secure knowing that there is a fellow human behind the wheel, and they have a clear culprit to blame if something goes wrong.
The same is true in the threatening waters of social media. Hire someone to run your social media, and you can fire and disown them publicly if they make a joke that goes down poorly. It’s something we’ve seen time and time again, particularly through platforms like Twitter that tend to see innocuous comments snowball into major incidents.
You don’t get that protective buffer with chatbots. It’s one thing if you’ve programmed every possible thing they can say, but when AI sophistication reaches a certain level, you won’t have that kind of granular control. The more complex a chatbot gets, the more of its functionality will be hidden from view.
Automated power with manual oversight
I don’t know if society will ever be ready for completely hands-off automation, even for something as minor as community management. Attempting it would likely lose as much time fielding misinterpreted requests and the corresponding complaints as it would gain through computational brute force and efficiency, and it would cause major reputational damage in the process.
What we likely will see instead is a new standard of sophisticated chatbots being overseen by human managers tasked with monitoring their performance, tweaking their settings, and determining where best to deploy them (and where best to step in themselves).
That way, you get most of the advantages (such as reduced workforce costs, improved response times, and easy information sharing) while avoiding many of the disadvantages (such as unclear human accountability, unchecked power to improvise, and misunderstood queries).
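This oversight model can be sketched as a simple confidence threshold: the bot replies on its own when it is sure, and routes everything else to a human queue. The threshold value, the `BotReply` structure, and the canned hand-off message below are all assumptions made for illustration:

```python
from dataclasses import dataclass

@dataclass
class BotReply:
    text: str
    confidence: float  # 0.0 to 1.0, from whatever model is in use

# Assumed cut-off; in practice this would be tuned per deployment.
CONFIDENCE_THRESHOLD = 0.8

def handle_message(reply: BotReply, human_queue: list) -> str:
    """Send confident replies automatically; escalate the rest to a human."""
    if reply.confidence >= CONFIDENCE_THRESHOLD:
        return reply.text
    human_queue.append(reply)  # a human manager steps in here
    return "Let me connect you with a team member."

queue = []
print(handle_message(BotReply("Your order ships Friday.", 0.95), queue))
print(handle_message(BotReply("Maybe try rebooting?", 0.40), queue))
print(len(queue))  # one message escalated for human review
```

The design choice here is the point: the human manager never disappears from the loop, they just spend their time on the escalated cases and on tuning the threshold rather than on every routine query.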
So, could chatbots be the community managers of the future? Well, the answer is somewhat complicated. Not by themselves, I don’t think. But used alongside (and guided by) human input, they could be powerful tools for taking much of the arduous work out of brand marketing and reputation management.
Author Info: Patrick Foster is a writer and e-commerce expert from E-commerce Tips — an industry-leading e-commerce blog dedicated to sharing business and entrepreneurial insights from the sector. Check out the latest news on Twitter @myecommercetips.