AI and Vibe Coding News for the Week of September 1

David (00:00)
Hey everybody. Welcome to Prompt and Circumstance. My name is David. And today we're gonna get you caught up on some AI news.

Ilan (00:03)
and I'm Ilan.

Alright, what are we talking about today? What kind of news have we got, David?

David (00:25)
I've got an interesting one, which is related to OpenAI. And this has to do with a new subscription tier that seems to be something they're offering in select regions of the world. It's called the Go tier. So we might be familiar with the Plus tier, the Pro tier. And this is going to be

coming to perhaps India and other select regions of the world. And it's going to cost five bucks a month, American, at least that's the conversion, which I think is great because it will give people more access to GPT-5 and other models, and some limited access to deep research, which I don't think you currently have on the free tier.

Ilan (01:12)
Great.

David (01:15)
With the app being so transformative, I think it's great that people are thinking about not leaving parts of the world behind, as has sometimes happened in the past.

Ilan (01:23)
Mm-hmm.

You know, you say that, but I recall a talk I attended by a product manager at Spotify who led their expansion into India. And one of the things he talked about was pricing models for Spotify in India. Now, I don't know if any of this happened, but he was saying that they had to think about the fact that

many of their potential customers were not able to support a monthly subscription. That actually wasn't available to them. You know, if they're living paycheck to paycheck, so week to week, then a monthly subscription just wasn't a cost they could sustain. But then they were looking at other models, like, for example, selling pay-as-you-go cards, like you would a phone card,

where somebody could top up how much Spotify ad-free time they had on their card, or other similar ways of getting people to pay for the service within structures that really work for them. And so I wonder, even going to a cheaper tier, $5 a month, let's say, has OpenAI

David (02:29)
Mmm.

Ilan (02:54)
done their, really done their homework deeply here? Or did they just make it cheaper to see how many people come on, and see what happens?

David (03:02)
That's a great question. It makes me think about the actual price anchor of $20 a month. Well, actually, do you know how they landed on that dollar amount?

Ilan (03:09)
Mm-hmm.

I remember reading about this, you'll have to refresh my memory.

David (03:17)
So if I recall correctly, they had to monetize this, and the person in charge was like, how much do I charge? I don't know. And so that person quickly did a Google search, because they didn't have web search back then on GPT, and landed on the fact that, okay,

20 bucks a month seemed, thumb in the air, to be the right number. So it wasn't that they did any kind of rigorous price sensitivity analysis, no Van Westendorp kind of chart there, or anything like that. So that's really interesting, that now, because that's where the price is anchored, everyone else, you know, Claude and so forth, almost needs to follow suit,

and it's led some pricing experts to say, wow, they are leaving a lot on the table. Because if you think about somebody who would be an active Plus user paying $20 a month, how much would they pay for you not to take it away? That's a bit closer to the real price of it, which is probably a little greater than 20 bucks a month.
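Since Van Westendorp's price sensitivity meter came up, here's a rough sketch of what that analysis looks like. The survey responses below are invented toy data, purely for illustration:

```python
# Toy Van Westendorp sketch: each respondent names a price below which
# a monthly subscription would feel suspiciously cheap, and a price
# above which it would be too expensive to consider. Numbers invented.
too_cheap = [3, 4, 5, 5, 6, 7, 8, 8, 10, 12]
too_expensive = [10, 12, 15, 18, 20, 22, 25, 28, 30, 35]

def pct_too_cheap(p):
    # share of respondents who would find price p suspiciously cheap
    return sum(t >= p for t in too_cheap) / len(too_cheap)

def pct_too_expensive(p):
    # share of respondents who would find price p too expensive
    return sum(t <= p for t in too_expensive) / len(too_expensive)

# The "optimal price point" is roughly where the two curves cross.
prices = [p / 2 for p in range(2, 101)]  # $1.00 to $50.00 in 50-cent steps
opp = min(prices, key=lambda p: abs(pct_too_cheap(p) - pct_too_expensive(p)))
print(f"optimal price point ~ ${opp:.2f}")
```

A full analysis also collects "cheap" and "expensive" (but still acceptable) thresholds, whose crossings bracket the acceptable price range around this point.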

Ilan (04:20)
Mm-hmm.

Mm-hmm.

Yeah, I mean, that's true. And that's probably where you're going to see behavior kind of like what Spotify has done over the years, or Netflix, right? When you create a product anchored around a certain price, and then you have a lot of user satisfaction and very low churn, that allows you to start increasing that price over time as a method to

get to more profitability. You know, that's what Netflix has done, where I think their original streaming-only price was around $9.99 a month. But now, ad-free, it's twenty-something a month, at least in Canada.

David (05:21)
Mm-hmm.

Mm-hmm.

Ilan (05:28)
You know, what they've realized is that as they increase prices little by little over time, the churn numbers are not going to go up that much, at least not relative to the increase in profits. So I'm sure we'll see that kind of behavior from OpenAI over time.
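That price-increase-versus-churn trade-off can be sanity-checked with back-of-envelope arithmetic. The subscriber count, prices, and churn figure below are invented for illustration, not Netflix's actual numbers:

```python
# Back-of-envelope: does a price hike pay off even with some extra churn?
# All figures invented for illustration.
users = 1_000_000
old_price, new_price = 9.99, 11.99   # monthly, in dollars
extra_churn = 0.03                   # 3% of subscribers leave over the hike

old_revenue = users * old_price
new_revenue = users * (1 - extra_churn) * new_price
print(f"old: ${old_revenue:,.0f}/mo, new: ${new_revenue:,.0f}/mo")
```

With these toy numbers the hike wins comfortably; the break-even churn is `1 - old_price / new_price`, about 17% here, far above the 3% assumed.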

David (05:36)
Mm-hmm.

Ilan (05:44)
This is a great time for me to tell you about our first sponsor for this week, Colab. They go to the market with your idea, make sure that people actually want it, and then you can even get them to build it for you and hand over a product with your first paying customers.

You can take it on from there and run your company. Click on the link in our show notes to find out more, and let them know David and Ilan sent you for $250 off the Validate and Build product.

David (06:13)
Yeah. I mean, you know, thinking more about different regions of the globe, it makes me think about some recent news about something called Latam-GPT, so Latin America GPT. It's a project being led by the Chilean National Center for Artificial Intelligence, and their goal is to create an open-source AI model

that serves Latin America. So, you know, the thinking being that what's currently out there doesn't capture a lot of the things that would be valuable or important to the people from that region.

I think it's great that they're doing this. I do get the sense that, because the large language models kind of scrape the internet, and that internet being sort of North American-centric, it has led to biases in the models that get created. So it's great to see a group doing this.

Ilan (07:13)
Mm-hmm.

Yeah, 100%. This does kind of tie in with one of the topics that I'm bringing, with several articles related to it, which is around small language models. So not quite the same, but the idea that you can have models trained with maybe a couple billion parameters,

as opposed to the hundreds of billions of parameters in the leading models from OpenAI and Anthropic and even DeepSeek. And these can provide higher accuracy than LLMs for the specific applications on which they are trained. There's a lot of interesting work out there on

their latency and the accuracy they provide, as well as their use in different systems. And I think the tie-in here is potentially creating models that are more targeted to a certain group of users, as opposed to an LLM, which is general by design. And as I think about it, I think a lot about where does that fit

into the world of product building.

David (08:48)
Yeah, it's great that you bring that up. You know, with everybody getting so excited about LLMs, the experience kind of assumes that you are going to send a message to a server and wait for it to come back, as opposed to it being processed locally. I think one great example of where a small language model could help everyday life

would be in the keyboard. I don't know how many times this has happened to you, but sometimes my keyboard will autocorrect something where it's like, okay, look, you know the context of this sentence, or at least you ought to. How would that ever make sense as a prediction of the next word? And so if we're able to get small language models to generate that next-word prediction for a keyboard, then I think

there'd certainly be fewer frustrated people sending texts. At least I'd be happier. Yeah.
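As a toy illustration of on-device next-word prediction, here's a bigram model in plain Python. Real keyboard models are far more sophisticated, and the corpus here is invented:

```python
from collections import defaultdict, Counter

# Tiny bigram "language model": the kind of local next-word prediction
# a keyboard could run without a server round trip. Corpus invented.
corpus = "see you at the gym . see you at the park . meet me at the gym".split()

bigrams = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    bigrams[prev][cur] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word that followed `word` in the corpus."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else ""

print(predict_next("the"))  # "gym" (seen twice, vs "park" once)
```

A small language model plays the same role as this lookup table, but generalizes to contexts it has never seen verbatim.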

Ilan (09:53)
That's a funny way of thinking about it. Where I see the most research in this topic is actually closer to my educational background, which is in mechanical engineering, and thinking about control systems. So control systems have been deeply researched over many

decades. In manufacturing, it's basically: how do you set up a system of controls that take measurements and feed those measurements, that data, back into a central

processing unit for your entire manufacturing line. And in the old days, that wasn't actually a computer. That was somebody doing the math by hand, because they'd get some data from a line and calculate these, they're called Cpk values, to see whether your process was in control or out of control. And there's a whole line of research around feedback mechanisms and feed-forward mechanisms.
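The capability indices Ilan is recalling (usually written Cp/Cpk) are simple arithmetic. A minimal sketch, with invented measurements and spec limits:

```python
import statistics

# Toy process-capability check: the "in control / out of control" math
# mentioned above. Measurements and spec limits invented for illustration.
measurements = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]
lsl, usl = 9.0, 11.0  # lower / upper specification limits

mu = statistics.mean(measurements)
sigma = statistics.stdev(measurements)

# Cpk: how many 3-sigma half-widths fit between the mean and the
# nearest spec limit; >= 1.33 is a common "capable" threshold.
cpk = min(usl - mu, mu - lsl) / (3 * sigma)
print(f"mean={mu:.3f}, sigma={sigma:.3f}, Cpk={cpk:.2f}")
```

A language model acting as the "decision maker" for such a line would consume numbers like these, rather than compute them itself.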

There's a lot of research on small language models and their ability to act as the processor or decision maker for these systems.

David (11:08)
Hmm.

Ilan (11:13)
Even Google and IBM and others have released small language models that one can pre-train, that can easily access these control-system data points or APIs, or even work with retrieval-augmented generation, or RAG, which we've talked about in other episodes,

and really give you accurate results very fast for these kinds of robotic systems that require decisions to be made in near real time to work in a manufacturing setting.
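As a toy illustration of the retrieval half of RAG, here's a keyword-overlap retriever. Production systems use embedding similarity, and the documents below are invented:

```python
# Toy retrieval sketch: score documents by word overlap with the query,
# then hand the best match to the model as context. Docs invented.
docs = [
    "line 3 torque spec: 12 Nm on the gripper mount",
    "conveyor belt speed limits: 0.5 m/s near human workers",
    "emergency stop resets require a supervisor badge",
]

def retrieve(query: str) -> str:
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

print(retrieve("what is the belt speed near workers"))
```

The retrieved text is then prepended to the model's prompt, so a small model can answer from facility-specific data it was never trained on.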

David (11:52)
It also makes me think about how, with small language models, in my mind, it would be that the language model decides which tool to call for a calculation, rather than doing the calculation itself, which I think is an important distinction. Because some people who might not be as familiar with the actual tech of AI models might think that it's the model that just does it, it's just smart,

as opposed to just deciding, I should call this function or this tool with these parameters. And that's expressed in language, right? So.
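That decide-then-call pattern can be sketched in a few lines. The tool names and the JSON shape below are invented for illustration, not any particular vendor's API:

```python
import json

# Minimal tool-dispatch sketch: the model doesn't do the math itself,
# it emits a structured "call this tool with these arguments" message,
# and ordinary code executes it.
TOOLS = {
    "add": lambda a, b: a + b,
    "convert_f_to_c": lambda f: (f - 32) * 5 / 9,
}

def dispatch(model_output: str):
    """Parse the model's tool call and run the matching function."""
    call = json.loads(model_output)
    fn = TOOLS[call["tool"]]
    return fn(**call["args"])

# Pretend the model produced this string for "what's 212F in Celsius?"
print(dispatch('{"tool": "convert_f_to_c", "args": {"f": 212}}'))  # 100.0
```

The model's job is only to choose the tool and fill in the parameters; the deterministic function does the actual computation, which is exactly the distinction David is drawing.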

Ilan (12:29)
Great.

Yeah, very interesting. Alright, what else have you got for us today, David?

David (12:34)
All right, so there's this MIT report that has been getting a lot of attention because of the statement that it's making, which is that 95% of AI pilots fail. And so here's the report. It's called The GenAI Divide: State of AI in Business 2025.

Some of the insights in this report are quite interesting. And it really goes to show that you can't just throw AI at a problem and hope that it fixes it magically. It's simply not that. It's a tool, and you need to know how to use the tool.

So here's an interesting quote from one of the CIOs they interviewed, which is that they've seen a dozen demos this year, maybe one or two are genuinely useful, and the rest are just wrappers or science projects. Meaning that it's something

that could just be a wrapper around GPT, for example, like an AI note-taker, maybe, or a science project in the sense that it's not proven, and there's a lot of risk and potentially a drawn-out project there, like an R&D sort of thing.

Ilan (13:41)
I've also read this report. As you said, it's making the rounds, and that quote is interesting. It reflects a reality that I've also seen in my experience, which is that a lot of these things are, you know, wrappers and science experiments.

I would say they lean more to the science-experiment side, right? You're trying something out. You want to test it. You want to see if it brings value. Now, I quibble more with the wrapper terminology there, because ultimately every technology is a wrapper around some other technology. This isn't an original opinion or thought, but you could say that

David (14:23)
Hmm.

Ilan (14:28)
Netflix is a wrapper around AWS, that without that kind of cloud-scale architecture, Netflix couldn't exist. So at the end of the day,

the important thing here is that to create products that provide value, it doesn't really matter what the underlying technology is, as long as they're providing value. And a simple wrapper can provide value as long as you understand the problem. Now, where I think you're probably going to go with this is a point that we've made before, which is that this just highlights the importance of

product management and product managers in the product development process. You need people who understand the problems that users are facing in order to deploy that technology effectively.

Now, about pilots or projects failing: if you think about it from a lean product management standpoint, that's probably a really good measure, right? If 95% of your pilots are failing, great. That means that you haven't wasted resources on building those products to scale

David (15:34)
Mm-hmm.

Ilan (15:47)
when they're not actually going to provide value to users.

David (15:51)
Yeah, there's definitely lots to think about. I just thought this other quote was really interesting, which is that

the head of procurement at a major CPG firm is saying that they rely heavily on peer recommendations and referrals from their network. That's interesting because, you know, in this market, where there is

such a large number of different AI companies offering different AI solutions, how do you get through the noise? And to me, this suggests that the way you do it is, well, first off, you find a reasonable problem to solve, and you do a good job at it, such that that person or that company is willing to recommend you to somebody else.

Ilan (16:43)
I mean, it's back to the fundamentals, right.

David (16:43)
Which, I mean, even without AI, that's what you ought to be doing.

Ilan (16:48)
So it makes me think about one of the more popular product metric frameworks, the pirate metrics, which starts with acquisition and goes all the way through to referral, right? Referral is the bottom of the funnel in this framework.

And that's really what you're driving towards if you have a very successful product and you want to make it a flywheel: your users are then referring other users into the application. Once you have solved for the other metrics, that's the last metric to solve for. So, you know, the product fundamentals stay true here.
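The pirate-metrics funnel (AARRR: Acquisition, Activation, Retention, Revenue, Referral) reduces to stage-to-stage conversion rates. The counts below are invented for illustration:

```python
# AARRR funnel sketch with invented user counts per stage.
funnel = {
    "acquisition": 10_000,
    "activation": 4_000,
    "retention": 2_000,
    "revenue": 500,
    "referral": 150,
}

stages = list(funnel)
for prev, cur in zip(stages, stages[1:]):
    rate = funnel[cur] / funnel[prev]
    print(f"{prev} -> {cur}: {rate:.0%}")

# A crude "flywheel" signal: referrals generated per acquired user.
viral_hint = funnel["referral"] / funnel["acquisition"]
print(f"referrals per acquired user: {viral_hint:.3f}")
```

When that last ratio climbs toward (and past) one referral per acquired user, the funnel starts feeding itself, which is the flywheel Ilan describes.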

David (17:32)
Absolutely.

Ilan (17:33)
Let me tell you about one of our favorite tools: an AI agent which sits on top of your data stack and allows you to ask natural-language questions to pull out data insights.

It's the perfect tool if you're looking to understand market segments and users, as well as trends within your product.

Click on the link in our show notes to try it today, and let them know that David and Ilan sent you for two months free. That's a $1,200 value for our listeners.

David (18:06)
All right. So related to that report is this article about the CEO of Accenture, Julie Sweet, talking about her perspective on red flags for AI projects. In it, she lists these three red flags: applying legacy process,

too much focus on projects that don't move the needle, like collaboration, and jumping into impractical AI projects. The one that stood out for me here is applying legacy process, because of how different the experience can be when something is AI-centric. Simply layering it on top of the existing process, rather than taking a step back and thinking about what's the

job to be done, how do we get to the end point from here with this new tooling. I think that's a really important one. And I wonder whether that was a lesson for some of those 95% of AI pilots, I'm wondering if that's what happened.

Ilan (19:00)
Mm-hmm. Right.

Somebody I work with put this really well: businesses are really good at optimizing processes to keep doing things the way they're doing them now. And the implication of this is that if you want to transform, if you want to change, then you need to step out of your existing processes. As you said, David, take a step back and think through what we're actually trying to accomplish here, and

aim towards that, and AI tooling can often help. Now, on the other side, one of the things he brought to us to think about was that lack of process is also

a strong indicator of failure in AI adoption in terms of tooling. People often want to throw AI at a problem where they don't have an existing process, or they don't have clear decision points or decision steps that need to be made, and they want to band-aid over that using AI. But AI is not magical. It's just a tool.

David (20:07)
Hmm.

Mm-hmm. Yeah, that's a really good point.

Ilan (20:33)
Now, in terms of how AI tools fit into the world they need to be part of, another topic area that really interests me, where there have been some movements in the last few weeks, is world models. And this came onto my radar watching a talk from Yann LeCun, one of the godfathers of deep learning, now at Meta. And

he was talking about how he's kind of left LLMs behind, like LLMs are a thing of the past for him, and that he's really interested in world models. What these are, are models that can build a world, or perceive an entire world, the way a human would. So not just through language, but through vision, through audio, as well as text and other modalities. And through that perception,

they begin to understand and learn how to do tasks in that world. And Google recently released their Genie 3 world model, which can generate a few seconds of a 720p 3D world for a model to interact with, based on some text input.

And where this is mostly relevant, in terms of where industries are going, is training robots, for example, that are going to be powered by an AI as their central mind: how do manufacturing flows work, or how do the operations of whatever

facility they're going to work in look today? What kind of people do they need to interact with? What tools do they have access to? What do they have to be careful of? What do they have to be aware of? I find this really fascinating. And I wonder if, kind of opposite to my small language model idea earlier, this is going to be the next evolution, where if

LLMs are trained with hundreds of billions of parameters, this is going to be trillions of parameters that are constantly changing, because the world around the model is constantly changing.

David (23:10)
That makes me think about this company called VSim. Have you heard of it? So what they do is physics simulation for robotics training. I believe it was founded by some people who used to work at Nvidia. And it's valued at

Ilan (23:14)
No.

Okay.

David (23:36)
$100 million. Right. Small, small potatoes, just 100 million.

Ilan (23:39)
small pennies

There are actually some estimates that say that for the shift away from manufacturing jobs in North America, which a lot of today's political conversations about reshoring manufacturing center on, the delta in the number of those jobs available was going to be the same anyway. It might've been more rapid because of offshoring

David (24:00)
Mm-hmm.

Ilan (24:10)
in the eighties and nineties, but because of the level of automation in most manufacturing facilities, they were headed in this direction anyway. So actually, reshoring those manufacturing operations is not going to bring those jobs back, because of the level of automation in those facilities today. And maybe one thing that's scary, if you're somebody who's hoping to work as a manager of a facility or something in that world,

David (24:12)
Hmm.

Hmm.

Hmm.

Ilan (24:40)
is that potentially, with these world models and the direction of AI tools, you may not even need a facility manager anymore, right? The whole automated plant can be run by an AI kind of sitting in the middle, who knows the intricacies that go into it, so that it can

David (24:57)
Mm-hmm.

Ilan (25:03)
apply complex reasoning to that ongoing process.

David (25:08)
That's the dark factory, right? The factory that runs by itself, where you don't need any people there. Yeah, it's certainly been a dream, I think, of some people, and with the tech we have today, it feels a lot closer.

Ilan (25:17)
Mm-hmm.

Definitely. Now, that being said, even the Genie 3 model, which is kind of the cutting edge of these world models, as I said, generates maybe a few seconds, maybe up to a minute or so, of these worlds that are described. So let's just say that, in terms of our computing capabilities, we might not be able to throw enough resources yet

at these types of worlds. But we know that computing, the level of processing, goes up as an exponential. So give it a few months, maybe a year, and we'll be in a very different place than we are today.

David (26:12)
Mm-hmm.

I've got other news that's Google-related. So Genie being something that came out of Google, Google DeepMind, I believe. There's also news about Google's new image-generation capabilities in Gemini. This is something that, when they were testing it out, was called Nano Banana. So some people might've seen some of the memes, but it's live now

in production. So let me go ahead and show you.

Okay, so here I am in Gemini, and I've got the Pro subscription here. And there you go, right away we have an in-app communication telling us that you ought to try Nano Banana. And the way that you would do that is here, under Tools.

You would click to open that and choose to create images, which pleasantly has a banana there. Thank you. You're really leaning into the name of this. All right. So there's a lot of things that this image-generation model can do. Let's go through some examples here.

David (27:35)
All right, so I've uploaded these two publicly available images of Dr. Dre and Brad Pitt. And I've asked it to generate a photo of these two people drinking bubble tea in downtown San Francisco. One is holding a taro milk tea, the other is drinking a honeydew milk tea. So let's go ahead and submit that and see how it goes. Now, one of the things that

is remarkable about Nano Banana is that it adheres to the original appearance of the person very well. That's one of the things that really stood out. I mean, look at that. That's not bad, right? First of all, that was quite quick. And they are holding roughly the correct color of bubble tea. And that looks like downtown San Fran to me. Does that look right to you, Ilan?

Ilan (28:28)
Looks right

to me, I see the cable car back there.

David (28:31)
Yeah. And also, I mean, if you look at the faces of the people, it has adhered quite well to the original photos that I provided. Granted, they are celebrities, but if you were to upload an image of somebody who's not a celebrity, it stays with the image of that person quite well. And so that's one of the things this model is quite good at.

Another thing...

Ilan (28:55)
I gotta say that the

speed of execution here is really great. You know, I was talking earlier about how we expect things to happen in an instant on the modern internet, and the LLMs have kind of detracted from that experience, right? They've moved us in the opposite direction. And I have to say that personally, having started to use Gemini more frequently myself for work, I've been really

amazed at the ability of Google to generate the kind of content that other models are able to as well, but much faster.

David (29:34)
Yeah, especially with this example here where it was generating an image. I mean, typically that would have taken, I don't know, a minute or so, whereas this was only a few seconds. So you're absolutely right, speed here is a noticeable improvement as well. The other thing is that because it's something you can continuously have a conversation with, you can continue to make modifications. So I could say something like,

Ilan (29:38)
Mm-hmm.

David (30:01)
make both people wear tuxedos and turn the bubble tea into anime style. So the anime-style aspect of it, if it does this right, it's going to make only part of the picture anime style, which it didn't do. So never mind that. And only Dr. Dre's wearing a tux.

But you know what, everything else stayed quite similar, right? If you look at the streetcar in the background, their poses, everything is very similar, and also their faces. And this is a problem that other image-generation tools had previously.

Ilan (30:45)
Yeah, it definitely is. I mean, even when generating the logo for our podcast, this was one of the issues we had. There was a point where the logo looked great in terms of the renderings of our faces from the headshots we'd provided. We didn't like the background and wanted to change it, and suddenly the renderings of our faces were not good, but the background was much better. It was very hard to keep part of the image static and then

allow the rest of the image to be edited. So it's really, really impressive what Google has done here with Nano Banana.

David (31:25)
Yeah. So go ahead and play with Nano Banana. We're curious to see what everybody comes up with.

Ilan (31:33)
Absolutely.

Well, with that, that's all the news we've got for you this week. We'll be back with our regularly scheduled programming next week, with season two on becoming AI-native product folks. In the meantime, you can follow us on all the socials @pandcpodcast, and

leave us a review, leave us a comment, let us know what you like, what you dislike, what we could do better, what else you want to hear. We'd love to hear from you. Until then, have a good week.

David (32:06)
See you next time.

© 2025 Prompt and Circumstance