If I had a dollar for every time someone mentioned “ChatGPT” to me, I would probably have a stable source of secondary income. And that’s not a bad thing at all. Having spoken to David Sweenor, author of Modern B2B Marketing: A Practitioner’s Guide to Marketing Excellence, about his views on Generative AI and its practical application, there seem to be no clear lines of “For” or “Against”.
David’s definition of AI is beautifully minimalistic – “data, plus analytics, plus automation, that’s AI.” Through our conversation, it became clear that AI could be the nudge marketers need to move from “just good” to delivering excellence, especially in customer experience.
ChatGPT burst onto our everyday scene in November 2022, when OpenAI released its early demo of ChatGPT. ChatGPT now hosts around 100 million users and generates about 1.8 billion visits per month. Since then, global brands such as Microsoft, Slack, and even our favorite language app, Duolingo, have integrated ChatGPT into various areas of their customer experiences. Duolingo, for example, leveraged GPT-4 to create scenario-based learning with AI personas and personalized feedback delivered in natural language.
A couple of everyday examples of how else ChatGPT can be used: a teacher running through English assignments, a traveler planning a visit to a new country (but too busy to plan a schedule), a content producer looking for inspiration, and of course, editors like us who need some ignition for an upcoming web publication.
We used both Ask AI and ChatGPT for this exercise. What started as inspiring quickly became hilarious, and then boring. The tools provided initial ideas that were, well, somewhat unexpected, but as we ran the query a few more times, the results became relatively static.
House Rules, or a Moral Compass?
Back in May, Canadian cognitive psychologist and computer scientist Geoffrey Hinton resigned from Google and went on to speak to multiple global news networks about the lurking dangers of AI. Hinton dedicated his life to the development of AI and had been working at Google for a decade.
After resigning from his role, the Godfather of AI talked about the possible abuse of AI in the hands of ‘bad actors’, noting that while AI today demonstrates only simple reasoning capabilities, it will not be long before it catches up. In terms of knowledge, AI already far surpasses what an average person possesses.
Think of the Borg in Star Trek, as millions of “average persons” contribute their knowledge to “teach” AI models across multitudes of domains. The models learn separately, but can co-opt one another’s knowledge and technology simultaneously. (Sounds a lot like assimilation to me!)
Take that knowledge and apply it to online scamming rings. Cyber scammers have turned to AI to recreate the voices of loved ones and authority figures to dupe the next victim into submission. McAfee reported that “36% (of people who lost money) said they lost between $500 and $3,000, while 7% got taken for sums anywhere between $5,000 and $15,000.”
AI models learn from individuals who provide the information or data, whether willingly or not. Whether it is through recorded voice notes, apps listening to conversations for better ad targeting, or voice-based authentication, both individuals and organizations have to be conscious of how this collected data is stored and used.
However, is this even possible without the intervention of “higher powers” to provide a directional nudge as to where the boundaries lie?
Being a Parent is Hard Work
As Uncle Ben once said, “with great power comes great responsibility.” With all innovation, particularly around data intelligence, there are always rules and guardrails to prevent mismanagement and to prevent abuse by bad actors in its application.
While almost all governments have put regulations in place around internet usage, data privacy, and access to information, commonplace AI usage has yet to reach a boiling point that demands a mandate.
With that being said, China has taken the first step in drawing up AI regulations while the EU and US play catch-up. Amongst its considerations, generative AI was specifically called out, with additional regulations around the algorithms behind AI learning models and governance of the development and implementation of AI-driven solutions. As one of the world’s leading innovators, China may be setting the tone for how other nations review and build a regulatory framework for AI research and development.
However, frameworks on a national scale may not be sufficient. Think about internet usage: it’s similar to limiting iPad time for your toddlers, or supervising your teenagers’ browser history. Even with policies in place to prevent misinformation and stem access to disturbing content, users can still find ways around them, should they choose to.
As a leader, there will be mounting pressure to understand just how AI is being used for your business. Will you choose to limit screen time, supervise all content, or will you choose to take a harder stance in ensuring that your AI kid is learning only the “right things”?
Taking Baby Steps or a Leap of Faith
Most leaders we’ve spoken to so far have taken a positive view of how AI can transform and revolutionize the way we create customer experiences, and of how sales and marketing can benefit immensely from its insights and efficiencies.
By automating manual, repetitive tasks with trained AI programs, marketers can re-invest their time and resources into more strategic work. As Thomas Been, CMO of Domino Lab, said when asked whether marketers have improved with the influx of data and AI solutions: “the best ones have improved, and to be polite, the worst ones have not moved.”
Businesses now have to decide when and how they can apply AI technology to their advantage, while taking measured steps to ensure that they maintain a humanistic and differentiated approach to their customer experiences.