Targeting AI

Hosts Shaun Sutner, TechTarget News senior news director, and AI news writer Esther Ajao interview AI experts from the tech vendor, analyst and consultant community, academia and the arts, as well as AI technology users from enterprises and advocates for data privacy and responsible AI use. Topics are tied to news events in the AI world, but the episodes are intended to have a longer, more "evergreen" run: they are in-depth and somewhat long form, aiming for 45 minutes to an hour in duration. The podcast also occasionally hosts guests from inside TechTarget and its Enterprise Strategy Group and Xtelligent divisions, and includes some news-oriented episodes featuring Sutner and Ajao reviewing the news.

Listen on:

  • Apple Podcasts
  • Podbean App
  • Spotify
  • Amazon Music
  • TuneIn + Alexa
  • iHeartRadio
  • PlayerFM
  • Samsung
  • Podchaser
  • BoomPlay

Episodes

Tuesday Oct 10, 2023

Customer experience chatbots that not only fail to deliver but also fall short of their human counterparts are the bane of CX designers' vision of an automated future.
Now, the arrival of generative AI technology is promising to correct dysfunctional chatbots' missteps, ease the burden on overworked and underappreciated human customer service agents and satisfy frustrated consumers.
But CX expert Don Fluckinger, a veteran tech journalist who has also worked as a CX industry analyst, casts a skeptical eye on claims made on behalf of generative AI and takes a cautionary view of automation and chatbots themselves.
"Losing jobs is never all right," Fluckinger said on TechTarget News' Targeting AI podcast. "But would it be OK for generative AI to more effectively answer customer questions so that humans could monitor what it's doing and not spewing out deceptive or wrong information? That would be good."
Many call centers already have AI-powered interactive voice response (IVR) systems, Fluckinger noted.
And yet, these don't work all that well.
"I've seen demos of these at conferences, on exhibition floors. I've read about them, but I have never run into it in real life yet," Fluckinger said. "The IVRs I hit are always pretty dumb."
Meanwhile, better IVR systems could be on the horizon, and generative AI could help.
Fluckinger noted, though, that while better call center and other CX platforms infused with generative AI technology are coming, they have to be tested and integrated with current systems.
And, finally, companies have to buy the new technology. But the industry isn't there yet.
Note: At the time this podcast was recorded, Fluckinger was a CX analyst for TechTarget's Enterprise Strategy Group. He now covers digital experience systems, end-user computing and the CPU/GPU market for TechTarget Editorial's news unit.
Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Together, they host the "Targeting AI" podcast series.
 

Monday Sep 25, 2023

Oyster is keeping its distance from the generative AI craze, at least for now.
When the vendor, whose platform helps companies with hiring, paying and managing employees in 180 countries around the world, recently came out with a new chatbot, Pearl, it fueled it with basic conversational AI, not the generative variety.
That's largely because Oyster wanted to skirt generative AI's by now well-known risks of outputting inaccurate and biased information, said Michael McCormick, senior vice president of product and engineering at Oyster, on this week's episode of TechTarget Editorial's Targeting AI podcast.
The vendor is a certified B Corporation with a mandate to focus on social and environmental performance.
"One of the big problems with generative AI that everyone knows about is its tendency to hallucinate," McCormick said. "We've seen examples of people wresting control away from the intent of the generative AI programmers, and convincing the generative AI to do and say all sorts of awful things.
"And there is not enough data capturing the experience of underserved and underrepresented groups," he added. "And so there's a huge amount of risk if you try to have guidance from systems like that in the HR space."
Pearl is Oyster's first public foray into using AI to interact with users of its platform. Essentially, the chatbot answers, in conversational format, questions about hiring and remote employment regulations in a world of distributed work in dozens of far-flung countries.
The chatbot is trained on Oyster's wealth of static information about global HR policies, taxes and benefits. So essentially it functions as a private large language model, with Oyster employees serving as "humans in the loop" to ensure that Pearl gives simple, consistent and accurate advice, thus further minimizing generative AI risk.
"If you give an individual the ability to have a direct conversation with a generative AI, you give up control of what might happen," McCormick said. "And you're at the mercy of OpenAI or Bard or whomever in terms of how they try to control that."
Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Together, they host the "Targeting AI" podcast series.
 
 

Monday Sep 11, 2023

Much of the world became aware of generative AI and large language models with the release of Dall-E and ChatGPT last year, but Conversica CEO Jim Kaskade has known about the technology since 2019.
During a walk with a top AI executive at Google, Kaskade said he learned a lot about where the tech giant was heading with generative AI technology.
Once he became CEO of the AI vendor specializing in digital assistants, he looked for ways to apply the technology in a way that was disruptive on the scale of earlier world-changing technologies.
Kaskade's company's brand of disruption is conversational AI and the generative AI-powered digital assistants that he sees as an automated workforce that will eventually ease the burden of much menial work now done by humans.
LLMs in the form of OpenAI's ChatGPT and similar systems have been adopted worldwide far more quickly than earlier disruptive technologies such as electricity, the telephone and television, but not all organizations are comfortable with the technology.
That uneasiness is analogous to the discussion in recent years about public cloud versus private and hybrid cloud, Kaskade said.
"It's just a sequence of been there, done that," he said on TechTarget Editorial's Targeting AI podcast. "Once people get really comfortable with the amount of governance that's put around the public application [product], the public cloud solutions, then the big enterprises will start to move from private LLM to public LLM. It'll take the same period of time as it did with cloud."
The more comfortable companies and people are with AI technology, the more benefits they can gain from it.
"Look at what happened with the computer, the PC, look what happened with the phone, look what happened with the world wide web," Kaskade said. "AI is going to be more disruptive than any of those or all of them added together."
Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Together, they host the "Targeting AI" podcast series.
 

Monday Aug 28, 2023

AI technology has become a "turtles all the way down" problem.
It's a dilemma in which an AI technology is created to solve a particular problem, but to test that first AI tool, the tester has to use another AI technology, then a third, and so on.
According to Johna Till Johnson, CEO of advisory and IT consulting firm Nemertes Research, most enterprises try to avoid this problem by feeding proprietary data into the first AI technology and testing the output, eliminating the need for constant AI-on-AI testing.
"The problem is, as you expand your AI outside of private data, the outputs can vary much more wildly," Johnson said during an interview on the Targeting AI podcast from TechTarget News. "You still need some form of AI to test the outputs and then you need some form of AI to test the AI that's testing the outputs, and you get your turtles all the way down again."
Enterprises looking to get away from this endless feedback loop might need to stick with performing manual testing of the output of the initial AI technology, Johnson continued.
Moreover, enterprises must ensure that the data they input into the technology from the beginning is trustworthy, she said.
So using an AI tool like OpenAI's ChatGPT is not advisable.
"ChatGPT has been abused horribly," Johnson said, adding that if the tool were used at her small business, its output would need to be checked by a human, a time-costly activity. "If you think about the best use of ChatGPT at the moment, it's writing really bad term papers."
Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas.
Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Together, they host the Targeting AI podcast series.

Monday Aug 14, 2023

Whether AI is good and helpful or evil and dangerous is the stuff of endless debate in tech circles during this year's "generative AI moment."
In the movies, though, it's been pretty consistent: AI is a malevolent force, as embodied by the HAL 9000 computer in the 1968 sci-fi classic 2001: A Space Odyssey.
But CX analyst Liz Miller of Constellation Research, who recently wrote a blog post about AI, the movies and Salesforce, says AI should be seen more like Meryl Streep's helpful assistant in the 2006 film The Devil Wears Prada.
Andy, the human assistant played by Anne Hathaway, whispers useful information about a prospective customer in the Streep character's ear -- and Miller thinks we should let AI technology do the same.
Indeed, it already is in some ways, in the form of digital assistants and generative AI-supported systems such as Microsoft's Copilot and Salesforce's various GPT tools.
"There's this fallacy that AI was going to take everything over, when in reality what AI needed to do was take over the stuff that we did not have the capacity to do in the time that we had to do it," Miller said on TechTarget Editorial's Targeting AI podcast.
"I think that's where we're starting to see AI take shape. And that's what I meant by that analogy," Miller added. "There's nothing wrong with HAL 9000. It's a great villain."
Meanwhile, beyond AI and the movies, Miller touches on other topics during the podcast, including the fast-moving saga of the X social media platform (formerly known as Twitter). For her, the AI story there is not about X itself but about what happens with mercurial X owner Elon Musk's nascent AI venture, xAI.
Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas. Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Together, they host the "Targeting AI" podcast series.
 

Monday Jul 31, 2023

Sam Abuelsamid thinks Tesla's driver assist technology is unsafe.
The mobility ecosystem analyst at Guidehouse Insights is a vocal critic of the electric vehicle giant's AI-powered "Autopilot" technology.
A former mechanical engineer, automotive journalist and Ford and General Motors employee, Abuelsamid also charges that the National Highway Traffic Safety Administration (NHTSA) has grossly undervalued safety considerations for self-driving and partially self-driving vehicles.
While Abuelsamid acknowledges that Tesla has advanced society's views on driving technology by appealing to consumers and popularizing electric vehicles, he also refuses to call such vehicles "autonomous." Instead, he refers to them as "automated," because, as he points out, few fully driverless vehicles are on the road.
In addition, Abuelsamid contends that Tesla has tried to do safety "on the cheap" by relying solely on cameras to power Autopilot features rather than considerably more expensive sensor arrays.
"I think they've been utterly reckless and irresponsible in their approach to automated driving by putting experimental software in the hands of average consumers who are not trained in how to properly test and evaluate this kind of safety-critical software," Abuelsamid says.
Meanwhile, autonomous vehicle technology vendors including Cruise, Waymo, Zoox and Motional are using multiple types of sensors, he says.
One Tesla fan and investor, Ross Gerber, CEO of Gerber Kawasaki Wealth and Investment Management, has disputed Tesla safety critics. He argues that autonomously driven Teslas will get increasingly safer with hundreds of thousands of consumers driving and testing out the beta version of the popular carmaker's full self-driving capability.
But Abuelsamid faults NHTSA for failing to effectively oversee safety aspects of autonomous vehicle technology vendors.
"I think the National Highway Traffic Safety Administration has been negligent in not doing more to require sharing of data from these test vehicles to build an understanding of how these things function," he says. "At a minimum what we need is the electronic equivalent of what we have to do as humans to get a driver's license."
Go to TechTarget News for reports on autonomous vehicle technology and other AI developments.

Friday Jul 28, 2023

Our guest is Michael Bennett, director of education curriculum and business lead for responsible AI at the Institute for Experiential AI at Northeastern University. Bennett, a practicing lawyer, holds a law degree from Harvard Law School and a PhD from Rensselaer Polytechnic Institute in Philosophy -- Science, Technology and Society. Bennett is also an occasional TechTarget contributing writer.
During the 45-minute episode, Bennett discusses the impact of New York City's new Law 144 governing the use of AI in automated employment decision tools, which he helped draft before it went into effect on July 5, 2023. The local law is likely to have a wide-reaching effect on employers across the U.S. if only because a large number of corporations are based in or have a significant presence in the country's largest city, Bennett says.
The law prohibits "employers and employment agencies from using an automated employment decision tool unless the tool has been subject to a bias audit within one year of the use of the tool, information about the bias audit is publicly available, and certain notices have been provided to employees or job candidates." Law 144 has already spun off a thriving new niche of law and audit firms providing services to employers to comply with the measure.
Bennett also zeroes in on the hottest topic in the tech world at the moment: generative AI. He talks about various efforts, including projects he's involved in, to rein in, regulate and harness for effective use large language models and the AI chatbots such as ChatGPT and Google Bard that have become ubiquitous in the business and consumer spheres over the last year.
On another front, AI and the arts, Bennett discusses the latest developments in copyright law as it relates to AI and also touches on the Hollywood TV writers strike and writers' concerns about generative AI systems taking over their jobs.
Podcast intro/outro music by Six Umbrellas: "Joker." This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Copyright 2023 All rights reserved.
