How to Capitalize on Artificial Intelligence

Published Feb 7, 2024, 1:08 PM

Watch Carol and Tim LIVE every day on YouTube: http://bit.ly/3vTiACF.
Eric Siegel, former Columbia University Professor, discusses his book The AI Playbook: Mastering the Rare Art of Machine Learning Deployment.
Hosts: Carol Massar and Tim Stenovec. Producer: Paul Brennan. 

This is Bloomberg Businessweek with Carol Massar and Tim Stenovec on Bloomberg Radio.

All right, when someone says AI, they can be referring to a lot of different things, you know that, right? They can be talking about chatbots such as ChatGPT or Google Bard, which are examples of LLMs, or large language models. We talked about this a little bit with Mandeep earlier.

Exactly. Okay, what about when it comes to AGI, artificial general intelligence? It's the holy grail of AI. It's not around yet, but everyone is working on it. What that means is AI that can perform as well as or better than humans on most tasks.

All right, so machine learning is something else. It's the focus of our next guest's new book. Eric Siegel is a consultant and former Columbia University professor. He's also the author of a new book entitled The AI Playbook: Mastering the Rare Art of Machine Learning Deployment. It is out today. Eric joins us on Zoom from the Bay Area. Eric, congratulations. Let's start with the basics, because I think sometimes we all throw around phrases assuming everybody gets it, and they don't necessarily. How do you define machine learning?

Well, thanks, Carol. Machine learning is technology that learns from experience in order to make predictions, in order to target and improve large-scale operations. So: who to target for marketing, based on predicting who's going to buy; which transaction to audit for fraud, based on predicting which is going to turn out to be fraudulent; who to approve for a credit application, based on who's going to be the most reliable debtor, et cetera; which satellite to investigate for potentially running out of battery; where to drill for oil. Pretty much, this is the type of AI, and you could call it predictive AI or predictive analytics to differentiate it from generative AI. This is the type of AI you turn to when you want to improve pretty much any and all of your large-scale operations, which consist of many decisions, and prediction is the holy grail for improving decisions.

Would you consider the way that content is surfaced on a social media platform like X, slash Twitter, or Instagram to be machine learning?

Oh, I see. Yeah.

As far as the ordering of your feed, yeah. Like, if I follow someone new on Instagram and then I log in to Instagram again, an old post from that person is going to be the first thing that comes up, because Instagram thinks I'm now interested in that.

Right, that's a prediction task. And the same thing with the ordering of your Facebook feed, the default feed, assuming you leave it at that, and the same thing with the ordering of your Google search results. It's all based on predictive models. That's what machine learning generates from data: a model that captures the patterns, the discoveries it's made from data, that help it predict. Predict is the action. So you're predicting in order to say which of these ten content items is going to be of most interest or most relevant, whether it's for Internet search like with Google, or the ordering of your news feed, or the ordering of your search results for properties on Airbnb.
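The feed-ranking idea above reduces to "score every candidate item with a model, then sort by score." Here is a minimal sketch; the scoring function, its signals, and the boost for newly followed accounts are invented for illustration, not how any platform actually does it.

```python
# Illustrative sketch: feed ranking as a prediction task. Each candidate item
# gets a predicted relevance score; the feed is those items sorted by score.

def predicted_relevance(item, user):
    """Toy stand-in for a trained model scoring one (user, item) pair."""
    score = 0.0
    if item["author"] in user["recent_follows"]:
        score += 0.5  # boost newly followed accounts, as in the Instagram example
    score += 0.1 * item["likes_from_friends"]
    return score

def rank_feed(items, user):
    return sorted(items, key=lambda it: predicted_relevance(it, user), reverse=True)

user = {"recent_follows": {"alice"}}
items = [
    {"id": "p1", "author": "bob",   "likes_from_friends": 2},
    {"id": "p2", "author": "alice", "likes_from_friends": 0},  # older post, new follow
    {"id": "p3", "author": "carol", "likes_from_friends": 1},
]
print([it["id"] for it in rank_feed(items, user)])  # → ['p2', 'p1', 'p3']
```

Note how the older post from the newly followed account surfaces first, matching the behavior described in the conversation.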

All right, whiteboard it for me. So: AI, big umbrella; machine learning, a form of AI. How do we stack it up and make sense of this environment? Because we do throw around AI, which has been around for a long time, but now we're talking about, you know, generative AI. Give me the whiteboard on it. You're teaching a class: how do you lay it all out, in terms of AI and what it means, or machine learning and how it'll play a role?

Well, the typical hierarchy is that machine learning is part of AI, and AI is the bigger umbrella term. But my opinion varies from a lot of the mainstream, although more people are jumping on board, in that AI is really an amorphous, ill-defined term. We're trying to ascribe the word intelligence to a machine. That's quite problematic to nail down. But if you don't define something well, you can't pursue it for engineering. AI is the story we hear about; machine learning is the technology that we have. And machine learning, in all those ways I just described, where you're predicting for each individual customer or healthcare client, as far as their disease progression, or where to drill for oil, or which satellite to investigate, or which transaction to audit, on that individual level, it's the same core technology that drives generative AI, for its ability to generate first drafts of writing or code or of images. And in those cases, what it's doing is predicting what the next word should be. Okay, well, it's actually the next token, but it's on that level of detail. What should the next word be? How should I change this individual pixel in an iteration as I'm rendering this image, I being the computer in this case? It's the same core technology: learning from data to predict.
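The next-word prediction Siegel describes can be shown with the most primitive possible version: a bigram model that just counts which token most often followed the current one. Real LLMs use vastly more context and learned parameters; this toy exists only to make "generate by predicting the next token" concrete.

```python
# Illustrative sketch of "predict the next token": a trivial bigram model that
# looks up which token most often followed the current one in training text.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each token, how often each other token followed it."""
    counts = defaultdict(Counter)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most likely next token after `token` (greedy decoding)."""
    return counts[token].most_common(1)[0][0]

counts = train_bigrams("the cat sat on the mat the cat ran")
print(predict_next(counts, "the"))  # → 'cat'
```

Generating text is then just repeating that prediction step, feeding each predicted token back in as the new context, which is the same loop an LLM runs at a far larger scale.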

You know, it's funny, Carol. So, just a little behind the scenes, Eric: we use this Google Doc to prepare the show, and we both work on it and work on different things. And Google now has predictive text, and when I was writing this one for Eric's intro, it actually got some stuff wrong, which I thought was really funny, because here we are talking about the technology, and it's getting it wrong when I'm, you know, writing a question in the doc. I mean, have you noticed that?

Yeah, no, you're absolutely right. You have to be very careful, because it jumps ahead if you don't catch it. So, yeah, how do we make sure all this stuff that it feels like we're increasingly relying on, whether it's machine learning or otherwise, is accurate and predicting the right things, or the right outcomes, or the smart outcomes? How do we make sure? Is it just the data sets that go into it? Is it that simple and that complicated?

I mean, no, no, because we're not headed with definitiveness towards reliability when it comes to that type of generated text. Look, these models are so seemingly human-like. They're amazing. I spent six years of my career in the Natural Language Processing Research Group at Columbia, and I never thought I would see what we're seeing today. It's so amazing, the way it creates often cohesive content, can talk about anything, uses expressions that humans use, because it's trained over so much data and the actual modeling itself is so advanced. However, what it's trained to do is essentially on that per-word level of detail, which really gives it a human-like aura. But that doesn't mean that it was developed to pursue higher-order human goals like being correct. That's a whole other thing. The fact that it's seemingly human-like doesn't mean it's a step towards general human behavior. So, you know, earlier Tim mentioned artificial general intelligence. I'm actually sort of a disbeliever in that.

I don't think it's...

Why are you a disbeliever?

Yeah, I don't think it's technically impossible someday. But I do not believe that any of the advancements, as impressive and valuable as they are, actually represent a concrete step towards general human-level capabilities, where the machine is basically, let's call it what it is in the story, an artificial human. You can onboard it like a human employee and let it rip; it can run a Fortune 500 company, whatever it is. That is a science fiction fantasy. I do not believe that we're taking concrete steps in that direction.

So you're not concerned about the rise of the bots?

Right. And that's what they call criti-hype, right, where you say, hey, look, this stuff is so good it could kill all of us. It's really just another way to sort of mismanage expectations, and there's a variety of reasons why people do that. Some genuinely believe it. I'm trying to calm the world down a little bit here. The stuff is extremely valuable in what it can do today, but the story that it's becoming human-like oversells. In other words, it's hype, and that gap between what's real and what's plausible from the stories is bad. It's with mismanaged expectations that you have the downfall, the disappointment, the disillusionment, in the more extreme case called an AI winter. And the problem there is you throw the baby out with the bathwater: you throw out the value of generative AI for first drafts, and of predictive AI, which, by the way, is still a much bigger industry right now.

You know.

Our producer Paul Brennan said, yep, you're going to want to talk to this guy for a long time. We have unfortunately run out of time, so promise you will come back soon, because I feel like this is a conversation we need to continue. You're just making so much sense. Eric, thank you so much. Eric Siegel, he's a consultant and former Columbia University professor. He's got a new book out, The AI Playbook: Mastering the Rare Art of Machine Learning Deployment.
