Adam Torres and Peter Voss discuss AGI.
Subscribe: iTunes / Spotify / Stitcher / RSS
Apply to be a guest on our podcast here
Show Notes:
Is Artificial General Intelligence “AGI” possible? If so, what would its effect be on humanity? In this episode, Adam Torres and Peter Voss, Founder, CEO & Chief Scientist at Aigo.ai, explore the Aigo.ai story and the future of AGI.
Watch Full Interview:
About Peter Voss
Innovator, Serial Entrepreneur, CEO, Engineer, Scientist. Peter's life mission is to bring human-level AI to the world to optimize human flourishing.
This, the holy grail of AI, will significantly lower the costs of many products and services; provide PhD-level AI researchers to help us solve problems like disease, energy, and pollution; and help individuals via expert personal assistants that enhance productivity, problem-solving ability, and overall well-being.
Peter has been dedicated to this mission for more than 20 years. He is now leading a major new initiative to leverage his team's deep experience and standard-setting commercial Aigo technology to rapidly close the gap to wide-ranging human-level capabilities. He can't think of anything more exciting, or anything that would have a greater positive impact on humanity.
About Aigo.ai
Their founder, AGI pioneer Peter Voss, saw a different future more than twenty years ago.
They coined the term "AGI" (Artificial General Intelligence) over 20 years ago and have spent every waking moment since then developing AGI through cognition. Today, as technologies like ChatGPT highlight AI's incredible potential, they stand on the brink of a monumental AGI breakthrough.
AGI is set to transform the world as profoundly as fire, electricity, and the internet. Their goal is to create a future where the quality of life for everyone in society is radically improved, as each person gets an increasingly intelligent and hyper-personalized assistant that significantly enhances productivity, engagement, problem-solving ability, and overall well-being. In short, radical abundance for everyone in a very pragmatic way.
Full Unedited Transcript
Hey, I’d like to welcome you to another episode of Mission Matters. My name is Adam Torres. And today I am coming to you from Brentwood, California, and I’m at Studio Place LA, and I am excited because Peter Voss is coming to town and we’re going to have a whole lot of fun. First off, Peter, I want to say welcome.
Thank you for coming into town. Yes, it’s good to be here. So we’ve been trying to get this together, kind of working on this through Srini and some of your team for now, what, a solid, I feel like it’s been a year, year and a half or so. Yeah, it’s been a few years. Yeah, so we got a whole lot to talk about today.
I want some updates on what's going on with Aigo.ai. We're gonna talk about, which I'm really excited about, AGI, which I know you're known for coining, which is artificial general intelligence. And we're going to talk about where you're at with that and a whole lot of other things. Plus, I mean, I've been looking at what you've been doing and I feel like, Aigo, you got a big vision, a whole lot to accomplish.
And I know there’s going to be a lot of people involved, so we’ll get into that as well. But to get us kicked off, we’ll start this episode, the way that we start them all and all our audience at home knows that’s with our mission matters minute. So Peter, as you’re aware at Mission Matters, our aim and our goal is to amplify stories for entrepreneurs, executives, and experts.
We want to get their message out to the masses. So that's what we do. Peter, what mission matters to you? Well, that's very obvious. It's AGI, Artificial General Intelligence. And that is to bring intelligence, artificial intelligence, to the masses that will improve people's flourishing and create radical abundance.
And that’s really the mission that I’m on and have been for some time. That’s, that’s exciting. And we’re going to get way more into this. And one of the things is we’ve heard about AI. We’ve heard all these things in the news, let’s say for the last couple of years where it’s mainstream I go and I go has been, you’ve been working on these fields over 15 years on these types of I’ll say not problems, I’m going to say opportunities for humanity and we’ll go into this But first, I’m just curious because I followed your career.
I've followed your journey since, since Aigo's come on my radar. And I just want to know, like, when did you consider or think that science and becoming a chief scientist or otherwise, when did you feel like that was going to be a part of your destiny? Like, how did that come up for you? I don't really know.
I can't pinpoint it specifically. I think I've always been interested in scientific things, but that's been balanced out by also sort of this entrepreneurial spirit that I inherently have. I don't only want to discover things. I also want to have them work in the real world. So there are really these two sides of the coin that have always fascinated me.
I've found, I mean, I've done over 6,000 interviews. I've interviewed a lot of people, and I feel like a lot of times there's people, when they're really highly educated, and I'm not putting anybody in a box, right? Nobody wants that. But sometimes when people are really highly educated, they don't have that other piece where they want to go out and step out on a limb and take that chance of becoming an entrepreneur.
Nothing against them, right? Everybody has their own path. But where did you get that piece from? Like, that extra something to make you want to go and make your vision a reality? Actually, I didn't finish high school, so I got thrown in the deep end pretty quickly in terms of having to look after myself and, you know, find work when I was 16 years old.
So it was kind of quite natural for me to figure out how to survive in the world. And that kind of just led me to figuring out how to create things, to create more things. And there's also been this curiosity to learn things, to figure things out and to invent things. Yeah. And, and thinking about the concept of, I always tell people this, I'm like, for my long-term listeners, they're like, oh, Adam, being an entrepreneur, this, that. To me, I was just trying to make some money.
I didn't know that's the fancy word we use now, but it was just trying to provide value, trying to figure out a way where I was going to fit in the world. So thank you, thank you for sharing that, because that's, I think, inspiring as well to our audience that are maybe in a certain similar place. I mean, we're just coming off.
So when we're recording this, just for everybody at home, it's 2023, or assuming 2024, we're already in another year. All right, post-pandemic. We're all kind of in this thing where we're trying to figure out what's next many times, and we're all growing at our own pace. So I feel like that is very useful for our audience.
I usually like to try to work in at least one pay-it-forward question, and that's maybe for some of our younger entrepreneurs or people that are just getting started. If you could go back to that Peter at, kind of, whatever age group, I'm not saying it has to be 16, it could be whatever you want, and give them some advice on what's on the road ahead.
What kind of things would you tell them? Well, I can think about sort of the biggest regret, or a big regret I have, is that I didn't start my own company until I was 25 years old. And I wish I had started earlier. So if you're inclined towards a sort of entrepreneurial life, and not everybody is, then I think the sooner you can start, you know, even while you're at school, if you do it part-time and get your feet wet, I think that's really useful.
There's nothing like actually doing it, being responsible for making things happen, for, you know, getting customers, getting a product out the door, getting paid for it, you know, all of that. So, yeah, that's one of the big regrets, that I waited until I was 25 before I actually started my first company.
Thank you. Thank you for sharing that. I feel that helps me, so I'm sure it'll help our audience at home as well. So jumping around here a bit, I do want to circle back to the main event here today. So AGI, first off, for our audience that aren't chief scientists or maybe scientifically inclined, maybe talk to us a little bit about just what that concept even means.
Yes, sir. Artificial General Intelligence is a term that three of us coined in 2002, and it was really to get back to the original dream of AI. The term artificial intelligence actually goes back 69 years. It was 69 years ago, and the original idea was to build machines that can think, learn and reason the way humans do.
And there are some very unique properties that human intelligence has. And that was the original dream. And they thought they could crack this in a year or two, roughly 69 years ago. I was just going to say. Yes. Now, of course, it turned out to be much, much harder. Of course. So what happened in the field of AI is that, over the decades, it changed into a field of narrow AI. And there's actually a subtle but very important difference: narrow AI is solving one particular problem at a time.
So, for example, IBM's Deep Blue is a good example; it beat the world chess champion in the 90s. You're solving the problem of playing a good game of chess, but it's really the external intelligence of the data scientist, the programmer, that works out how to solve the problem, how to build a machine that can solve that particular problem.
So in 2001, I got together with some other people who also felt the time was ripe for us to get back to that original dream, that original mission of AI: to build general intelligence, where the machine itself is intelligent enough to figure out how to play a good game of chess without somebody specifically programming it to do that.
And that's what artificial general intelligence really is about. Now, at the time, we didn't know that the term would kind of take off. We just got together. We felt, let's go back to that original dream. And we wrote a book on the topic called Artificial General Intelligence. And now, of course, the term has been widely used, especially recently.
Yeah. So I'd like, I want to stick in those early days a little longer, 'cause I don't always get a pioneer on this show from this field. So was it fun? Like, was it fun having those conversations and dreaming and ideating in the beginning? Like, what was that like, even just in the early days? Well, actually, I came to America in 1995, and to me, it was like being a kid in a candy store.
Yeah, that's what I'm thinking. It'd be amazing. When I came to California, I was introduced to some of my heroes, you know, from all sorts of books that I'd read in the fields of AI, longevity, futurism and so on. So it was a really very exciting time. And, you know, then of course you get down to doing the actual work.
It's hard. At least the ideating part, you know, we got a little bit of fun in there, the dreaming big. You know, 20-plus years later, I'm still as excited about that mission. And that brings us a little bit more to the present day. Speaking of the present day, and I just wanted to set the stage for how long you've been working on this first off, but nowadays, I mean, and obviously I'm in media, you know, lots of interviews in technology.
So nowadays, AI is in every headline and everything. It reminds me of kind of the original dot-com era. You add .com to any company and all of a sudden, right, it's supposed to be worth something. So there are similarities, I know there's some separation, but that's the closest thing I can think of to what I'm hearing now, at least in media specifically. So why does all this matter right now? Like, why does it matter? Are we at a pivotal time? Like, why does all this matter? Yes. So I think, especially with the release of ChatGPT just over a year ago, people are getting a real taste for what AGI might be like: a machine that can really do the things that normally only humans could do.
So I think it's giving us a taste of this, but at the same time, people working in the field are also realizing that these large language models like ChatGPT are actually not going to get us to this mission, to this vision that we have of really, truly generally intelligent machines, machines with, you know, human-like intelligence. For example, just a few weeks ago, the chief AI scientist of Meta, Yann LeCun, said large language models are an off-ramp to AGI. Wow. They're a distraction. They're a dead end. Now you have to balance that against the hype on the one hand, and the enormous, amazing things that large language models can do, like summarization and writing poems, you know, the amazing things that they can do. But it's also become clear that it's fundamentally not the right approach to get us to AGI. And I, I got to tell you my AI story. So once my mom called me and said something about, like, how do you use that GTO or that? I was like, the car, mom? Like, what do you mean?
She's like, the G-something, you know, the, this, that. I said, oh, we're getting close. Once my mom finds out something is happening, I'm like, we're definitely mainstream, because we're getting close. Yeah. And in fact, yesterday I was talking to somebody and he said, my parents are really amazed with it because they're using it.
They were using it to create a thank-you letter to someone. And, you know, they were amazed how well it did that. So yeah, absolutely. The technology can do some amazing things, but it's not the real thing. It is not the real thing. And that's a big distinction. I want you to, you're welcome to get as technical as you want to.
Let's go a layer deeper on that, because I don't want to assume that everybody understands the concept of language models or why exactly it's not the right thing. So maybe take us just a layer deeper there. Yeah. So before I do that, I think, to really be able to make the contrast, I want to talk a little bit more about what AGI really is, what the implications of AGI are. Because of the enormity of having truly human-level AI, you know, there's the general consensus that that's going to be bigger than fire, bigger than electricity, you know.
The internet, how transformative it will be and can be. So, you know, for example, with real AGI, for one, you can have a scientist. So imagine one AI training itself up to be a cancer researcher, you know, a PhD-level cancer researcher. You could now make a million copies of that. Yeah. You have a million PhD-level cancer researchers chipping away at the problem, pursuing different avenues and communicating with each other, sharing knowledge much more effectively than humans can, you know, without egos getting in the way and so on. And now you can take that across really anything that requires scientific research, you know, whether that's better battery technology, you know, nanotechnology, nanobots that can go into our bodies and repair damage, you know, any scientific field, or even, you know, governance.
I mean, we don't seem to be very good at creating the right kind of governance for society. So being able to apply more brainpower to problems of humanity. So that's one area, sort of research, scientific approaches. Another area is dramatically reducing the cost of goods and services by automation and, you know, how that will create radical abundance.
I mean, just by tremendously lowering the cost of things. So that's kind of the second area. A third area I'd like to highlight is what I call a personal, personal, personal assistant. So the reason I put three personals there is they're really three different important aspects of the word.
The one is you own it. It serves your agenda, not some mega-corporation's agenda. And secondly, it's hyper-personalized to you. So it really gets to know your history, your goals, you know, who you interact with and so on. And the third personal is the issue of privacy: that you decide what you want to share, and with whom.
So I have that, you know, vision of applying AGI to giving everybody in the world this personal, personal assistant, you know, like a confidante, a helper that can help you make better decisions. A bit like a little angel on your shoulder, you know, that helps you avoid making bad decisions and so on.
So I think that we all need one of those. Hold on. Give me a hint, when can I get one of those? I'm in. All right, sign me up. So, you know, I think there are so many ways in which true AGI can increase human flourishing tremendously. So I think it's really important to understand how big a deal AGI is, to then say why we should work on it, you know. So getting back to the question you asked: you know, technically, sort of, why are large language models an off-ramp? Why do I agree with that?
So, there are a number of fundamental, inherent problems with the technology. And in fact, the name GPT already gives you a very good clue. G means generative. It makes up stuff. Uh, you know, it’s trained with 10 trillion pieces of information from the Internet, from all over the place, good, bad and ugly.
Sure. And so the system will create responses that are picked from that knowledge, from that huge body of knowledge. But it doesn't know what's right or what's wrong. Mm-hmm. So it just creates, it generates, it makes up stuff. And that's why these systems suffer from what people call hallucinations. You know, you can't trust them, basically.
So that's one big problem. The second big problem is the P in GPT: the P stands for pre-trained. They are pre-trained with these 10 trillion pieces of information at a cost of, you know, hundreds of millions. Well, I think the latest ones are like 200 million, to train the system, to do all the number crunching, to take this massive amount of information and kind of condense it and build a model that you can use.
So that pre-training also means that they cannot learn in real time. So if they hear something new, like the conversation we're having, if it's not something they already know, they can't learn it. They can't update that. So that's a huge limitation, and it's inherent. There are other limitations.
They don't have metacognition. They can't think about their own thinking. They don't know what they're saying. Yeah. They don't know when they don't know things. And I think the other thing that really worries people is the vast amount of electricity and computational power they need. I mean, we now have Sam Altman saying he needs to build new nuclear fusion power stations to do this, you know, $7 trillion worth.
I mean, it's insane, the sort of environmental impact and the amount of power it needs. Whereas if you compare it to our brain, it's 20 watts of power, not 20 gigawatts. So, you know, large language models, that whole approach, the G, generative, the P, pre-trained, and the T, the transformer technology, which really locks in the need to pre-train, and, you know, the whole cost expansion and so on.
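The G and P limitations Peter describes can be sketched with a toy model. The Python below is a deliberately tiny bigram language model, our own illustration rather than anything from Aigo or OpenAI, and nothing like a real transformer in scale; but it shows the two properties in miniature: it generates fluent-looking recombinations of its training data with no notion of truth, and once pre-training finishes, the frozen model cannot absorb new facts at inference time.

```python
import random

def pretrain(corpus):
    """One-time 'pre-training' pass: count which word follows which.
    The returned model is treated as frozen from here on."""
    model = {}
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, start, length, seed=0):
    """'Generative' sampling: pick any word ever seen after the current one.
    There is no check for truth or sense, only statistical plausibility."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break  # a word never seen in training has no continuations
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the rug"
model = pretrain(corpus)

print(generate(model, "the", 4))      # fluent-sounding remix of the corpus
print(generate(model, "ChatGPT", 4))  # unseen word: the frozen model is stuck
```

Every continuation it emits was seen during pre-training, and any prompt outside that data goes nowhere; updating the model would require another full training pass, which at real-model scale is the hundred-million-dollar cost Peter mentions.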
So that's why it's an off-ramp, you know, and why we need something else. Yeah. I want to spend some of the time that we have here today going further into Aigo.ai and exactly where you're at with the company and the technology. Maybe talk to us a little bit more about that. Sure. So, you know, obviously the question then is, if large language models aren't going to get us to AGI, do we have an idea of what might?
And this is really what I've been working on for the last, you know, 20-plus years. And so we've developed something that is called cognitive AI. And its starting point is how people think and what's important in human intelligence. And we can, you know, we can talk more about that. But we've basically been building both development prototypes and also commercial applications, very successful commercial applications.
So we've been alternating between development on the one hand and commercialization on the other. The commercialization gives you that reality check, you know: in theory, you can build something that sounds good, but it also has to work in the real world. So we've been alternating between developing the technology and then commercializing it, and, you know, we find that a very good model, and we believe that this cognitive AI approach will actually get us to AGI.
Now, and not to put you on the spot, but are there any, like, use cases or companies that you've worked with up to this point, with where you're at? Oh, yes, absolutely. I mean, we've automated hundreds of call centers over the years, very successfully. One recent example that I can give is the 1-800-Flowers group of companies, including Harry & David and The Popcorn Factory and so on.
These are big brands we all know and use and love. Yeah, it's about 12 different companies. So we have very successfully automated their call center operation. Wow. And so, for example, just over two months ago, when we had Valentine's Day, we actually replaced 3,000 agents in their call center that they normally have to hire to handle this big spike, because, you know, a few days before Valentine's Day, everybody calls in.
And we've been able to do that with a 90 percent self-service rate. You know, at any time people can ask for a human, but our system is so good, it just does the job. There's no wait time. It has access to all of your information, so you can really get... Some of the people listening right now, they're like, that order from 1-800-Flowers, wait a minute, what was going on last year? It's amazing. Continue, please. So, you know, we're very much, on the commercial side, you know, there's still a lot of integration. And, you know, we're not at human-level intelligence there yet with Aigo, with the current commercial technology. That's why we're so keen to really focus on moving that technology forward and taking it all the way to human level.
Yeah. Now, what you're describing is amazing. And I just have to think about, like, we're, you know, we're a small media company. I like to think we punch above our weight, right? But we're not trying to, like, change the world to this point. So when I think about the amount of money or people or resources or collaborations that's needed to make this vision a reality, like, what does that look like?
I just can't even fathom, like, what does that type of thing look like? Yeah. So the amazing thing is, with cognitive AI, as opposed to generative AI, cognitive AI doesn't need 10 trillion pieces of information to get trained. It's, you know, maybe a few million pieces of information, so like a million times less, which means it also doesn't cost a hundred million or 200 million to train the model.
So we're talking about a much, much lower cost to produce that, and without the environmental impact, all the negatives that go with it. So, so it's a viable alternative, like it's a... Well, yes. Yeah, the alternative isn't viable. Yeah, exactly, that's what I mean, yeah. That one isn't viable. So it's a viable alternative.
Absolutely. And so, you know, we currently have a very small team on the development side, only 10 people on the development team right now. So we are obviously wanting to increase it, but we're not talking about hundreds of millions of dollars. You know, we're talking about tens of millions of dollars, not hundreds, to actually get the technology expanded and improved, you know, from where we are.
To get to human level, what's a good type of, and you don't have to say any names, but what type of partner in this venture would you be looking for? Like, what do you think would be a good fit? Do they have to have, like, previous experience in that part of the field, whether it's an investor or otherwise? Like, what do you feel like would be a great partner?
That's a very good question. Now, with AGI being such a powerful technology, we are really looking for partners, for financial partners, who share our vision in terms of helping human flourishing. Because you can focus your development and your company development on, how can we make the most money
the quickest? Yeah, that I'm not interested in. This is a multi-trillion-dollar opportunity, so the money is not even a question. But what is important is to develop the technology and to deploy it in a way that's really going to be beneficial to humans. You know, for example, that personal personal assistant: when I say the first personal, you own it, it serves your agenda, that is very different from what all the big players are doing when they try to give you a so-called personal assistant. They control it.
It serves their agenda, you know, like, you know, Alexa and Siri. And, you know, so it's having the right kind of partner who really wants to make AGI happen. That's kind of really important for us. And I like that you bring this up, and you didn't use this word, but it's one that is often linked around that conversation, and it's AI and ethics, and ethical AI.
And I've heard a lot of different terms. For humanity, right, like, what's going to happen next? I'm interested in hearing a little bit more about just your concept of what you feel AI ethics is, like, you know, for the other leaders out there that may watch this, even in the future, and they're developing technologies or other things, like, their responsibility.
Like, any thoughts on that? Yes, absolutely. In fact, before I started working on AI, I actually took off five years to study philosophy, psychology and AI, to really understand so many different aspects about thinking and living and values and so on. So in philosophy I studied, for example, epistemology, the theory of knowledge: how do we know anything?
What is reality? What's the relationship between our knowledge and reality? But I also looked into how do we know right from wrong: ethics. So I was fascinated with free will; I actually spent six months trying to really understand free will, and what it is and isn't, because that's kind of key to ethics.
You know, if we don't have control of ourselves, or to what degree do we have control of ourselves, to what degree can we be responsible for our actions? And then, how do we know good from bad? So I studied these various fields, and of course also, in the field of AI and cognitive psychology: how do children learn?
How does our intelligence differ from animal intelligence? What do IQ tests measure? You know, so really to get a deep, deep understanding of intelligence. And, you know, to get back to your question: it's unfortunate that in Hollywood, AI only ever seems to be the bad guy that destroys everything. You know, we have that kind of preconception. Oh, yes.
Oh, yes Well, AI is going to kill us all, but really, when you, when you, when you analyze how we actually build AI, not some theoretical model that you can build for yourself that can scare you to death, but if you actually look at how we build AI, we build AI to really help us to serve us. That’s actually how, how it’s built.
And the rationality of AI actually helps us make better decisions and helps us avoid things that we would consider bad or even immoral, because rationality tells you, in a way, that, you know, for example, a zero-sum game doesn't make sense. You really want to engage in interactions where both parties benefit.
And so those are the kind of ethics that come out of a rational approach. I've actually written on sort of a rational approach to ethics, and how, you know, that informed me as well in terms of how I see the future of AI and how AI can help us be more rational and be more moral.
Amazing. And to take those five years and to do what you did, I mean, you're a man of many surprises. I'm enjoying getting to know you better and getting to know more about, like, the heart behind the company and the vision. So thank you, Peter. Last question here, if I can, if we could allow ourselves to dream. Where you're an entrepreneur, I'm an entrepreneur, many entrepreneurs at home that are watching this as well.
Your vision, like the big, the dream, the what's next of it for you. Like, dream for a moment: what's next? Yeah. Well, I think that's very clear. I want AGI to happen as soon as possible, and I'm looking for people who share that vision to actually make it happen, so we can all benefit from AI, to have this radical abundance and increase human flourishing.
Peter, thanks again so much for coming on the show, and to the audience at home, if this is your first time with Mission Matters, I hope you really enjoyed this episode. I look forward, Peter, to seeing AGI come to the masses and come to fruition, and to have you on the show as one of the original founders who coined that term.
Absolutely amazing. And again, to the audience at home, if you haven't already done it yet, hit the subscribe button. This is a daily show; each and every day we're bringing on new guests, new thought leaders, new ideas, and hopefully new inspiration to help you along in your journey. Again, don't forget, hit that subscribe button.
And we're going to put all of Aigo.ai's information in the show notes so that you can just click on the links, head right on over and follow their journey.