Targeting AI
Hosts Shaun Sutner, TechTarget News senior news director, and AI news writer Esther Ajao interview AI experts from the tech vendor, analyst and consultant community, academia and the arts, as well as AI technology users from enterprises and advocates for data privacy and the responsible use of AI. Topics are tied to news events in the AI world, but the episodes are meant to have a longer, "evergreen" run: they are in-depth and somewhat long form, aiming for 45 minutes to an hour in duration. The podcast also occasionally features guests from inside TechTarget and its Enterprise Strategy Group and Xtelligent divisions, along with news-oriented episodes in which Sutner and Ajao review the headlines.
Episodes
Monday Aug 05, 2024
Democratic presidential candidate Kamala Harris is a product of two decades of California politics who has longstanding ties to the tech and AI communities in her home state.
But in her role as President Joe Biden's vice president during the past four years, Harris was tasked with overseeing Biden's executive order on AI, with its emphasis on government regulation. And it was she who hosted leaders of tech giants at the White House last year and secured pledges from them to focus on AI safety.
In sharp contrast is the GOP presidential nominee, Donald Trump.
While Trump's running mate, Senator J.D. Vance (R-Ohio), has a background in tech venture capital, Trump himself has no tech experience but backs a largely hands-off approach to tech and AI companies.
In simple terms, Trump is anti-regulation, while Harris favors a moderate regulatory stance on big tech and the suddenly emergent generative AI sector, a view that roughly parallels that of Biden.
In this episode of the Targeting AI podcast from TechTarget Editorial, three observers of the intersection of tech, AI and politics offered their analyses of the complex dynamics of the likely Harris-Trump faceoff.
Makenzie Holland, big tech and federal regulation senior news writer at TechTarget, emphasized that "there is a huge focus from the Biden-Harris administration on AI safety and trustworthiness."
Meanwhile, "we've obviously seen Trump attack the executive order," she noted.
For R "Ray" Wang, founder and CEO of Constellation Research, the choice for the tech industry is fairly clear.
"I stress the libertarian view because I think that's important to understand that tech doesn't necessarily want to be governed," Wang said.
The other guest on the podcast, Darrell West, a senior fellow in the Governance Studies program at the Brookings Institution, has written a book about policymaking in the AI era. He also pointed out the marked divergence between Harris and Trump on tech and AI issues.
"Even though she historically has been close to the tech sector, I actually think she will maintain Biden's tough line on a lot of issues because that's where the party is these days," West said. "And also that's where public opinion is on many tech issues."
Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, analytics and data management technologies. He is a veteran journalist with more than 30 years of news experience. Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.
Monday Jul 29, 2024
For the past year, the Targeting AI podcast has explored a broad range of AI topics, none more than the fast-evolving and sometimes startling world of generative AI technology.
From the first guest, Michael Bennett, AI policy adviser at Northeastern University, the podcast has focused intently on the popularization of generative AI, while also touching on traditional AI.
While that first episode centered on the prospects of AI regulation, Bennett also spoke about some of the controversies then emerging in the nascent stages of generative AI.
"Organizations who have licenses to use and to sell photographers' works are pushing back," Bennett said during the inaugural episode of the Targeting AI podcast.
While Bennett's point of view illuminated the regulatory and ethical dimensions of the explosively growing technology, Michael Stewart, a partner at Microsoft's venture firm M12, discussed the startup landscape.
With the rise of foundation model providers such as Anthropic, Cohere and OpenAI, generative AI startups over the past 12 months have chosen to partner with and be subsidized by cloud giants -- namely Microsoft, Google and AWS -- rather than seeking to be acquired.
"This is a very ripe environment for startups that have a partnership mindset to work with the main tech companies," Stewart said during the popular episode, which was downloaded more than 1,000 times.
The early stages of generative AI were marked by accusations of data misuse, particularly from artists, writers and authors.
Our Targeting AI podcast hosts have also spoken to guests about data ownership and how large language models are affecting industries such as the music business.
The podcast also explored new regulatory frameworks like President Joe Biden's executive order on AI.
With some 27 guests from a diverse group of vendors and other organizations, the podcast took shape and laid the groundwork for a second year with plenty of new developments to explore.
Coming up soon are episodes on Democratic presidential candidate Kamala Harris’ stances on AI and big tech antitrust actions, election deepfakes and tech giant Oracle's foray into generative AI.
Listen to Targeting AI on Apple Podcasts, Spotify and all major podcast platforms, plus on TechTarget Editorial’s enterprise AI site.
Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, analytics and data management technologies. Together, they host the Targeting AI podcast series.
Monday Jul 15, 2024
AWS is quietly building a generative AI ecosystem in which its customers can use many large language models from different vendors, or choose to employ the tech giant's own models, Q personal assistants, GenAI platforms and Trainium and Inferentia AI chips.
AWS says it has more than 130,000 partners, and hundreds of thousands of AWS customers use its AI and machine learning services.
The tech giant provides not only the GenAI tools, but also the cloud infrastructure that undergirds GenAI deployment in enterprises.
"We believe that there's no one model that's going to meet all the customer use cases," said Rohan Karmarkar, managing director of partner solutions architecture at AWS, on the Targeting AI podcast from TechTarget Editorial. "And if the customers want to really unlock the value, they might use different models or a combination of different models for the same use case."
Customers find and deploy the LLMs on Amazon Bedrock, the tech giant's GenAI platform. The models are from leading GenAI vendors such as Anthropic, AI21 Labs, Cohere, Meta, Mistral and Stability AI, and also include models from AWS' Titan line.
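To make the multi-model idea concrete, here is a minimal sketch of what invoking one Bedrock-hosted model could look like from Python. The request shape follows Bedrock's Anthropic Messages format and the `boto3` `bedrock-runtime` client's `invoke_model` call; the helper function names are our own, and the snippet is an illustration rather than AWS's recommended pattern.

```python
import json

def build_claude_request(prompt: str, max_tokens: int = 512) -> str:
    """Build an Anthropic Messages-style request body for Bedrock's invoke_model."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def invoke(prompt: str,
           model_id: str = "anthropic.claude-3-sonnet-20240229-v1:0") -> str:
    """Send the prompt to a Bedrock-hosted model (requires AWS credentials)."""
    import boto3  # third-party; only needed for the live call
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(modelId=model_id,
                                   body=build_claude_request(prompt))
    return json.loads(response["body"].read())["content"][0]["text"]
```

Because each model family on Bedrock expects its own request body, swapping vendors mostly means swapping the body-builder and the model ID, which is what makes the "combination of different models for the same use case" approach practical.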
Karmarkar said AWS differentiates itself from its hyperscaler competitors, which all have their own GenAI systems, with an array of tooling for implementing GenAI applications, as well as GPUs from AI hardware giant Nvidia and AWS' own custom silicon.
AWS also prides itself on its security technology and its GenAI competency system, which pre-vets and validates partners' competencies in putting GenAI to work for enterprise applications.
The tech giant is also agnostic on the question of proprietary versus open source and open models, a big debate in the GenAI world at the moment.
"There's no one decision criteria. I don't think we are pushing one [model] over another," Karmarkar said. "We're seeing a lot of customers using Anthropic, the Claude 3 model, which has got some of the best performance out there in the industry."
"It's not an open source model, but we've also seen customers use Mistral and [Meta] Llama, which have much more openness," he added.
Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. He is a veteran journalist with more than 35 years of news experience. Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. They co-host the Targeting AI podcast.
Monday Jul 01, 2024
The biggest global retailer sees itself as a tech giant.
And with 25,000 engineers and its own software ecosystem, Walmart isn't waiting to see how GenAI technology will play out.
The company is already providing its employees -- referred to by the retailer as associates -- with in-house GenAI tools such as the My Assistant conversational chatbot.
Associates can use the consumer-grade, ChatGPT-like tool to frame a press release, write guiding principles for a project or tackle whatever else they want to accomplish.
"What we're finding is as we teach our business partners what is possible, they come up with an endless set of use cases," said David Glick, senior vice president of enterprise business services at Walmart, on the Targeting AI podcast from TechTarget Editorial.
Another area of emphasis for Walmart's GenAI efforts is associates' healthcare insurance claims.
Walmart built a summarization agent that has reduced the time it takes to process complicated claims from a day or two to an hour or two, Glick said.
An important area in which Glick is implementing GenAI technology is payroll.
"What I consider our most sacrosanct duty is to pay our associates accurately and timely," he said.
Over the years, humans have monitored payroll. Now GenAI is helping them.
"We want to scale up AI for anomaly detection so that we're looking at where we see things that might be wrong," Glick said. "And how do we have someone investigate and follow up on that."
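Walmart hasn't published how its payroll anomaly detection works, but the idea Glick describes, flagging values that look wrong so a person can investigate, can be sketched with a simple robust-statistics check. The function name, data and threshold below are hypothetical; a median-based score is used because a single large error can mask itself in a plain standard-deviation test.

```python
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Return indexes of payroll amounts whose modified z-score (based on
    the median absolute deviation) exceeds the threshold."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:  # all values (nearly) identical: nothing to flag
        return []
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

# Weekly paychecks for one associate; the last entry looks mis-keyed.
payroll = [2150.0, 2200.0, 2180.0, 2165.0, 2190.0, 2175.0, 21800.0]
flag_anomalies(payroll)  # → [6]
```

In the workflow Glick describes, a flagged index would route the paycheck to a human for investigation and follow-up rather than blocking payment automatically.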
Meanwhile, as for the "build or buy" dilemma, Walmart tends to come down on the build side.
The company uses a variety of large language models and has built its own machine learning platform, Element, for them to sit atop.
"The nice thing about that is that we can have a team that's completely focused on what is the best set of LLMs to use," Glick said. "We're looking at every piece of the organization and figuring out how can we support it with generative AI."
Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. He is a veteran journalist with more than 30 years of news experience. Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. They co-host the Targeting AI podcast.
Monday Jun 17, 2024
While Apple garnered wide attention for its recent embrace of generative AI for iPhones and Macs, rival endpoint device maker Lenovo already had a similar strategy in place.
The multinational consumer products vendor, based in China, is known for its ThinkPad line of laptops and for mobile phones made by its Motorola subsidiary.
But Lenovo also has for a few years been advancing a “pocket to cloud” approach to computing. That strategy now includes GenAI capabilities residing on smartphones, AI PCs and laptops and more powerful cloud processing power in Lenovo data centers and customers’ private clouds.
Since OpenAI’s ChatGPT large language model (LLM) disrupted the tech world in November 2022, GenAI systems have largely been cloud-based. Queries from edge devices run a GenAI prompt in the cloud, which returns the output to the user’s device.
Lenovo’s strategy -- somewhat like Apple’s new one -- is to flip that paradigm and locate GenAI processing at the edge, routing outbound prompts to the data center or private cloud when necessary.
The benefits include security, privacy, personalization and lower latency -- resulting in faster LLM responses and reducing the need for expensive compute, according to Lenovo.
“Running these workloads at edge, on device, I'm not taking potentially proprietary IP and pushing that up into the cloud and certainly not the public cloud,” said Tom Butler, executive director, worldwide communication commercial portfolio at Lenovo, on the Targeting AI podcast from TechTarget Editorial.
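Lenovo hasn't detailed its routing logic, but the edge-first paradigm described above, keeping sensitive or small prompts on the device and escalating to the data center only when necessary, can be sketched as a simple policy function. Everything here, including the token budget and the sensitivity flag, is a hypothetical illustration.

```python
def route_prompt(prompt: str, contains_proprietary_ip: bool,
                 on_device_token_limit: int = 2048) -> str:
    """Decide where to run a GenAI prompt: on-device first, cloud as fallback.

    Hypothetical policy: prompts touching proprietary IP always stay local;
    otherwise, fall back to the data center only when the prompt exceeds
    the local model's context budget.
    """
    approx_tokens = len(prompt.split())  # crude whitespace token estimate
    if contains_proprietary_ip:
        return "on-device"
    if approx_tokens > on_device_token_limit:
        return "data-center"
    return "on-device"
```

A production router would weigh more signals, such as battery state, NPU availability and model quality requirements, but the shape is the same: a policy decision made before the prompt ever leaves the device.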
The edge devices that Lenovo talks about aren’t limited to the ones in your pocket and on your desk. They also include remote cameras and sensors in IoT AI applications such as monitoring manufacturing processes and facility security.
“You have to process this data where it's created,” said Charles Ferland, vice president, general manager of edge computing at Lenovo, on the podcast. “And that is running on edge devices that are deployed in a gas station, convenience store, hospital, clinics -- wherever you want.”
Meanwhile, Lenovo in recent months rolled out partnerships with some big players in GenAI including Nvidia and Qualcomm.
The vendor is also heavily invested in working with neural processing units, or NPUs, in edge devices and innovative cooling systems for AI servers in its data centers.
Shaun Sutner is a journalist with 35 years of experience, including 25 years as a reporter for daily newspapers. He is a senior news director for TechTarget Editorial's information management team, covering AI, analytics and data management technology. Esther Ajao is a TechTarget Editorial news writer covering artificial intelligence software and systems. Together, they host the Targeting AI podcast.
Friday May 31, 2024
The rise of generative AI has also brought renewed interest and growth in open source technology. But the question of open source is still "open" in generative AI.
Sometimes, the code is open -- other times, the training data and weights are open.
A leader in the open source large language model arena is Meta. However, despite the popularity of the social media giant's Llama family of large language models (LLMs), some say Meta's LLMs are not fully open source.
One vendor that built on top of Llama is Lightning AI.
Lightning AI is known for PyTorch Lightning, an open source Python library that provides a high-level interface for PyTorch, a deep learning framework.
Lightning in March rolled out Thunder, a source-to-source compiler for PyTorch. Thunder speeds up training and serves generative AI (GenAI) models across multiple GPUs.
In April 2023, Lightning introduced Lit-Llama.
The vendor created the Lit-Llama model starting with code from NanoGPT (a small-scale GPT for text generation created by Andrej Karpathy, a co-founder of OpenAI and former director of AI at Tesla). Lit-Llama is a fully open implementation of Llama source code, according to Lightning.
Being able to create on top of Llama highlights the importance of "hackable" technology, Lightning AI CTO Luca Antiga said on the Targeting AI podcast from TechTarget Editorial.
"The moment it's hackable is the moment people can build on top of it," Antiga said.
However, mechanisms of open source are yet to be fully developed in GenAI technology, Antiga continued.
Antiga also said it's unlikely that open source models will outperform proprietary ones.
"Open source will tend to keep model size low and more and more capable, which is really enabling and really groundbreaking, and closed source will try to win out by scaling out, probably," Antiga said. "It's a very nice race."
Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. Together, they host the Targeting AI podcast series.
Monday May 20, 2024
In intellectual tech circles, a debate over artificial general intelligence and the AI future is raging.
Dan Faggella is in the middle of this highly charged discussion, arguing on various platforms that artificial general intelligence (AGI) will be here sooner than many people think, and it will likely take the place of human civilization.
"It is most likely, in my opinion, that should we have AGI, it won't follow too long from there that humanity would be attenuated. So, we would fade out," Faggella said on the Targeting AI podcast from TechTarget Editorial.
"The bigger question is how do we fade out? Is it friendly? Is it bad?" he said. "I don't think we'll have much control, by the way, but I think maybe we could try to make sure that we've got a nice way of bowing out."
In addition to his role as an AI thinker, Faggella is a podcaster and founder and CEO of AI research and publishing firm Emerj Artificial Intelligence Research.
In the podcast episode, Faggella touches on a wide range of subjects beyond the long-term AI future. He takes on election deepfakes (probably not as dangerous as feared, and the tech could also be used for good) and AI regulation (there should be the right amount of it), as well as robots and how generative AI models will soon become an integral part of daily life.
"The constant interactions with these machines will be a wildly divergent change in the human experience," Faggella said. "I do suspect absolutely, fully and completely that most of us will have some kind of agent that we're able to interact with all the time."
Meanwhile, Faggella has put forth a vision of what an AGI-spawned "worthy successor" to humans could look like in the AI future. He has written about the worthy successor as "an entity with more capability, intelligence, ability to survive and (subsequently) moral value than all of humanity."
On the podcast, he talked about a future inhabited by a post-human incarnation of AI.
"Keeping the torch of life alive would mean a post-human intelligence that could go populate galaxies, that could maybe escape into other dimensions, that could visit vastly different portions of space that we don't currently understand," he said.
Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. He is a veteran journalist with more than 30 years of news experience. Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Together, they host the Targeting AI podcast.
Monday May 06, 2024
Salesforce was an early adopter of generative AI, seizing on large language model technology from OpenAI to integrate into its own applications.
But the CRM and CX giant quickly evolved an open model strategy. It now gives customers access to multiple third-party LLMs while providing its own AI trust layer to try to ensure that Salesforce users can safely rely on AI-generated outputs.
Jayesh Govindarajan, senior vice president at Salesforce AI, calls this approach "BYOLLM," or bring your own LLM.
"The Salesforce LLM strategy is to provide an open-model ecosystem for our customers," Govindarajan said on the Targeting AI podcast from TechTarget Editorial.
"Salesforce-developed models are, of course, available out of the box on the AI stack, but customers can also bring their own LLMs. And to support this level of choice and diversity, the trust layer is model-agnostic," he continued.
As befits its core customer base, Salesforce sees sales, marketing and customer service applications as most ripe for generative AI, and that is where the vendor is focusing on the technology as a productivity engine, Govindarajan said.
Routine customer conversations, whether taking place in email or other messaging formats, can be automated with generative AI so that the technology is embedded in daily workflows.
An example Govindarajan cited is using generative AI to let a marketing person easily make a marketing campaign multilingual.
"How do we make a customer service person more efficient? How do we make a rock star salesperson 10 times more successful? How do we make a marketing manager create campaigns that convert really well?" Govindarajan said.
"It's not easy to do that. You want to do it with safety, security, and trust," he said. "As you know, the systems can go off. So, you want to have the right guardrails in place to be able to shape it into the right form."
Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. He is a veteran journalist with more than 30 years of news experience. Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems.
Monday Apr 22, 2024
The explosive popularity of generative AI has been accompanied by the question of whether developers are finding great uses for the new technology.
As the hype around GenAI has grown, developers' perceptions of its usefulness have shifted.
"Developers are eager to kind of embrace AI more into their complex tasks, but not for every part, and they're not open to the same degree," GitHub researcher Eirini Kalliamvakou said on the Targeting AI podcast from TechTarget Editorial.
On Jan. 17, Kalliamvakou released new findings that showed the evolution of developers' expectations of and perspectives on AI tools.
For many developers, GenAI tools are like a second brain and serve mainly to reduce some of the cognitive burden they feel performing certain tasks. Cognitive burden in coding is produced by tasks that require more energy than developers would like to invest.
"They feel that it is not worth their time," Kalliamvakou said. "This is a sort of task that is ripe for automation."
Many developers are also using AI tools to quickly make sense of a lot of information and understand the context of what they need to do.
While many developers find AI tools helpful, others experience AI skepticism, she added.
Developers who are skeptical of AI have typically tried AI tools and come away unsatisfied.
"They felt the tools are not good enough," Kalliamvakou continued.
This is because the tools sometimes gave inaccurate responses and were not helpful.
"What they were saying was AI [tools] at the moment, they cannot be trusted, they cannot give ground truths," she said.
Both groups of developers are important for GitHub and other AI vendors to keep in mind as they build tools for developers.
Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. Together, they host the Targeting AI podcast series.
Monday Apr 08, 2024
The growth of generative AI technology has led to concerns about the data AI technology companies use to train their systems.
Authors, journalists and now musicians have accused generative AI vendors of using copyrighted material to train large language models.
More than 200 musicians signed an open letter released Tuesday by the Artists Rights Alliance calling on AI developers to stop their "assault on human creativity."
While the artists argue that responsible use of generative AI technology could help the music industry, they also maintain that irresponsible use could threaten the livelihoods of many.
The problem is permissions, said Jenn Anderson-Miller, co-founder and CEO of music licensing firm Audiosocket, on the Targeting AI podcast from TechTarget Editorial.
"It's widely understood that a lot of these training models have trained on copyrighted material without the permission of the rights holders," Anderson-Miller said.
While it's true that the musicians did not produce evidence of how their works have been infringed on, generative AI vendors such as OpenAI have failed to prove that they didn't infringe on copyrighted works, she said.
For Anderson-Miller, one solution to the problem is creating a collaborative effort with musicians that would include licensing.
As a company that represents more than 3,000 artists, Audiosocket recently inserted an AI clause in its artist agreement.
In the clause, Audiosocket defined traditional and generative AI and said it plans to support the ecosystem of traditional AI.
"We don't see this as directly threatening our artists," Anderson-Miller said. "We see this as, if anything, it's helping our artists."
Esther Ajao is a TechTarget Editorial news writer and podcast host covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's information management team, driving coverage of artificial intelligence, unified communications, analytics and data management technologies. Together, they host the Targeting AI podcast series.