Why ‘generative AI’ is suddenly on everyone’s lips: It’s an ‘open field’

If you’ve been closely following the progress of OpenAI, the company run by Sam Altman whose neural networks can now write text and create images with astonishing ease and speed, you might skip this post.

On the other hand, if you’re only vaguely aware of the company’s progress and the growing traction that other so-called “generative” AI companies are suddenly gaining, and want to better understand why, you might benefit from this interview with James Currier, a five-time founder turned venture capitalist who co-founded NFX five years ago with several of his serial-founder friends.

Currier belongs to the camp of those watching this progress closely — so closely that NFX has already made a number of related investments in what he describes as “generative tech,” a space that is attracting more founding teams’ attention each month. In fact, Currier sees the buzz around this newest iteration of AI as less hype than a realization that the wider startup world is suddenly staring at a very big opportunity for the first time in a long time. “Every 14 years,” Currier said, “we have one of these Cambrian explosions. We had one around the internet in ’94. We had one around mobile phones in 2008. Now we’re having another one in 2022.”

In retrospect, this editor wishes she had asked better questions, but I’m learning here, too. Below are excerpts from our chat, edited for length and clarity. You can listen to our longer conversation here.

TC: There’s a lot of confusion about generative AI, including how new it really is and whether it’s just the latest buzzword.

JC: I think what’s happening in the AI world broadly is that there’s a sense that we can have deterministic AI, which helps us identify the truth of things. For example: is that a defective part on the production line? Is this an appropriate meeting? There, you’re using AI to determine something the same way a human would determine it. That has largely been what AI has meant over the past 10 to 15 years.

The other set of AI algorithms are these diffusion algorithms, which are designed to look at huge amounts of content and then generate something new from it: “Here are 10,000 examples. Can we create the 10,001st, similar example?”

Until about a year and a half ago, those were very fragile, very brittle. [Now] the algorithms have gotten better. But more importantly, the corpus of content these models look at has gotten bigger because we have more processing power. So what’s happened is that these algorithms ride Moore’s Law — [with vastly improved] storage, bandwidth, and compute speed — and suddenly become capable of producing something that looks very much like what a human would produce. That means the text they write and the drawings they draw look, on their face, very similar to what a human would do. And all of that has happened just in the last couple of years. So it’s not a new idea, but it’s newly crossed that threshold. That’s why everyone looks at this and says, “Wow, this is amazing.”

So it’s computing power that’s suddenly a game-changer, rather than some previously missing technological infrastructure?

It didn’t change suddenly; it changed gradually, until the quality of what it generates reached the point where it makes sense to us. So the answer is mostly no — the algorithms are very similar. Among these diffusion algorithms, they have gotten somewhat better. But really, it’s about processing power. Then, about two years ago, [the powerful language model] GPT came out, which required a kind of on-premise computation, and then with GPT-3, [OpenAI] could do [the computation] for you in the cloud. Because the data models are so much larger, they need to run on OpenAI’s own servers; you just can’t afford to do it [on your own]. At that point, things really took off.

We know because we’ve invested in a company building AI-based generative games, including “AI Dungeon,” and I think the vast majority of GPT-3’s compute was coming from “AI Dungeon.”

So does “AI Dungeon” require a smaller team than other game makers would need?

That’s definitely one of the big advantages. They don’t have to spend as much to store all the data, and a small group of people can create dozens of gaming experiences that all leverage that data. [In fact] the idea is that you could add generative AI to older games so that your non-player characters can say something far more interesting than they do now — though you’re going to get a fundamentally different gameplay experience when the AI is built into the game from the start than when AI is bolted onto an existing game.

So has the quality changed dramatically? Will this technology plateau at some point?

No, it’s always getting better. It’s just that the incremental differences get smaller over time, because the models have gotten pretty good.

But the other big change is that OpenAI isn’t really open. They produced this amazing thing, but it isn’t open and it’s very expensive. So groups like Stability AI and others got together and said, “Let’s make open source versions of this.” And the cost has dropped 100-fold in just the past two or three months.

And these aren’t forks of OpenAI?

These generative tech companies aren’t all going to be built on OpenAI’s GPT-3 model. That was just the first one. The open source community has now replicated a lot of its work, though in terms of quality they’re probably six or eight months behind. But they’ll get there. And because the open source versions cost a third or a fifth or a twentieth of what OpenAI charges, you’re going to see a lot of price competition, and you’re going to see a proliferation of models competing with OpenAI. You could end up with five, six, eight, maybe 100 of them.

Then on top of these, unique AI models will get built. So you might have an AI model that’s really focused on writing poetry, or one that’s really good at generating visual images of dogs and dog hair, or an AI model really dedicated to writing sales emails. You’ll have a whole layer of these purpose-built AI models. And then on top of that, you’ll have all the generative tech, which is: How do you get people to use the product? How do you get people to pay for it? How do you get people to log in? How do you get people to share it? How do you create network effects?

Who makes money here?

The application layer, where people go after distribution and network effects, is where the money will be made.

What about the big companies that can integrate this technology into their existing networks? Wouldn’t it be difficult for a company without that advantage to just pop up out of thin air and make money?

I think what you’re looking at is something like Twitch, which YouTube could have integrated into its model but didn’t. Twitch created a new platform and a valuable new culture and pocket of value for investors and founders, even though it was hard. So you’ll have great founders who use this technology to their advantage, and that will create a seam in the market. While the big guys are busy doing other things, those founders will be able to build multi-billion-dollar companies.

The New York Times recently ran a piece in which several creatives said the generative AI applications they use in their fields are just one tool in a broader toolbox. Are these people naive? Are they at risk of being replaced by this technology? As you mentioned, the team working on “AI Dungeon” is smaller. That’s good for the company, but bad for the developers who might otherwise be working on the game.

I think as with most technologies, people have this uncomfortable feeling, [as when] robots replaced jobs in car factories. When the internet came along, a lot of direct mail people felt threatened that companies would be able to sell directly and stop using their paper-based advertising services. But [after] they embraced digital marketing, or digital communication via email, their careers probably got a huge bump: their productivity went up, along with their speed and efficiency. The same thing happened with credit cards online. We didn’t feel comfortable putting our credit cards online until maybe 2002. But those who embraced [that wave in] 2000 to 2003 did better.

I think that’s what’s going on right now. The writers, designers, and architects who are leaning into these tools to 2x or 3x or 5x their productivity will do very well. I think the whole world will see a productivity increase over the next 10 years. For the 90%, this is a huge opportunity to do more, create more, and connect more.

Do you think it was a mistake for OpenAI not to [open source] what it’s building, given what’s happening all around it?

Leaders ultimately behave differently than followers. I don’t know; I’m not inside the company, so I really can’t say. What I do know is that there’s going to be a huge ecosystem of AI models, and it’s not clear to me how those AI models will stay differentiated, because they all tend toward the same quality, and then it just becomes a pricing game. In my view, the winners here are Google Cloud and AWS, because we’re all going to be producing this stuff like crazy.

OpenAI may eventually move up or down the stack. Maybe they become like AWS themselves, or maybe they start making specialized AI models and selling into certain verticals. I think everyone in this space will have a chance to do well if they navigate it right; they just have to be smart.

By the way, NFX has more to read about generative AI on its website; you can find it here.
