Learn, Make, Learn

Generative AI x Product with Anmol Anubhai

July 18, 2024 | Ernest Kim, Joachim Groeger | Season 1, Episode 19

Generative AI: step change or snake oil du jour? We enter this mire with Anmol Anubhai as our guide & insight into the future of GenAI x product creation as our destination. Join us for a refreshingly grounded conversation on a topic that’s typically full of hype.

FOLLOW-UPS – 01:42
The Hyundai Ioniq 5 N Manual Mode Changes Performance EVs Forever
WTF is a Product Manager?
Dare Obasanjo on Product Management at Figma Config 2024
Dare Obasanjo on Threads
Shreyas Doshi on Threads
Shreyas on why product management is hard

GenAI x PRODUCT WITH ANMOL ANUBHAI – 07:45
Anmol on LinkedIn
Meet Figma AI
Shane Allen on Figma AI
Mira Murati: Conversation with Dartmouth Engineering
Why Apple’s iPad Ad Fell Flat
Figma’s AI app creator accused of ripping off Apple weather app
Adobe’s new terms of service say it won’t use your work to train AI
Instagram is training AI on your data
CHI 2024: Evaluating Human-AI Partnership for LLM-based Code Migration
Why companies are turning to ‘citizen developers’

ANMOL’S ADVICE TO COMPANIES – 32:08

AI INTERFACES BEYOND THE BOT – 34:45
Google: People + AI Guidebook

CULTIVATING MENTAL MODELS – 42:37

WHO’S DOING THIS WELL & WHERE’S IT HEADED? – 48:12
AI brings soaring emissions for Google and Microsoft

WEEKLY RECS – 56:44
Anmol: Andrew Ng’s Machine Learning Collection & Amazon PartyRock
Joachim: BOOX Palma
Ernest: Earthworks Audio ETHOS microphone

CLOSING & PREVIEW – 01:08:48

****

Rant, rave or otherwise via email at LearnMakeLearn@gmail.com or on Threads @LearnMakeLearnShow.

CREDITS
Theme: Vendla / Today Is a Good Day / courtesy of www.epidemicsound.com
Drum hit: PREL / Musical Element 85 / courtesy of www.epidemicsound.com

Ernest:

Hello and welcome to Learn Make Learn, where we share qualitative and quantitative perspectives on products to help you make better. My name is Ernest Kim and I'm joined by my friend and co-host Joachim Groeger. Hey Joachim, how's it going?

Joachim:

I'm good. I'm hot. It's too hot. This is, I'm, should we just pretend that the heat is the reason why we can't have more regular episode uploads? We're just pretending that our audio equipment is failing. Yeah, no, it was a very hot week. It was lovely, but quite hot. But, um, yeah, we're all doing very well, and enjoying the summer overall. What about you, Ernest?

Ernest:

Yeah, I mean, I, I have to acknowledge that I have AC, um, our central AC in our place, so I'm a little spoiled.

Joachim:

Yeah,

Ernest:

Did it hit, uh, did it get over 100 degrees Fahrenheit in Seattle as well?

Joachim:

I'm not sure if we crossed the a hundred threshold. Maybe we did actually. Certainly felt like we did.

Ernest:

Yeah. Yeah, I think we got to 105 on Tuesday here in Portland, so it was pretty bad.

Joachim:

Yeah. Must've sucked. Must've sucked really bad, huh, Ernest, with all that AC pumping out.

Ernest:

Yeah, watching those news reports about how hot it was. All right. Well, this is episode 19. And today we're going to discuss the intersection of generative AI and product creation with our first ever guest, Anmol Anubhai. Now we've touched on the topic of generative AI in previous episodes, but really only in passing. So we're very excited to have the opportunity to discuss it in depth with Anmol, who actually works in the field, unlike us; she's an expert. Uh, and we'll dive into that conversation in just a minute, but let's start with some follow ups. Joachim, do you have any follow ups to our previous episode on the weight of history, or any episodes prior?

Joachim:

Yeah, I have one very, very tiny follow up. Um, and I think it was in the episode, um, The Weight of History. We were discussing, um, Hyundai's IONIQ 5, the N version. And, um, we had gotten very fanboy about the whole concept and the people involved in that project, and the fact that they had created a fake manual gearbox for an electric car, which doesn't really make sense, but added to the adventure and enjoyment of the vehicle. Now, a friend of mine who is also a listener, uh, Jay Conheim, was very, very upset with our treatment of the Ioniq 5's manual gearbox. He was deeply offended because he pointed out, and rightly so, that the Kia EV6 GT, which is built on the same platform and is also a high performance electric vehicle, has a similar feature and had it well before the Hyundai Ioniq 5. And, yes, he was so upset he immediately texted me. Um, so I have to apologize to him. I have to apologize to all other Kia drivers. Yes, there is a version of that technology in the Kia EV6. If anyone has access to both of those cars, please let us know whether they are comparable manual gearbox experiences in an electric car. But just, yeah, a correction: Hyundai is not the first. Kia have a version of this as well in their car and, uh, apologies to everyone out there for that misrepresentation.

Ernest:

I think the thing I'm most excited about is that one of our episodes evoked an emotional response in a listener. That's awesome.

Joachim:

Yeah, I mean, we, better than no response, right? It's like, can be super negative, super positive, neutral. We do not want, we need, we need people to be angry. That's, that's better than

Ernest:

Yeah, absolutely. For those folks listening, please do send us your feedback at LearnMakeLearn@gmail.com. We want to hear from you, uh, including corrections. Um,

Joachim:

Yes.

Ernest:

I have two follow ups today and they both connect back to our WTF is a Product Manager episode from way back in February. Uh, the first is a presentation by Dare Obasanjo, who is a lead product manager at Meta, where he leads the team responsible for the in-app browser and key ad platform technologies for Facebook, Messenger and Instagram. So vitally important products with lots and lots of users. Uh, just a few weeks ago in late June, Dare presented at the Figma Config conference, and we'll actually reference Figma Config again when we get to our main topic. But coming back to Dare, the title of his presentation was Product Management: Half Art, Half Science, All Passion, but really what he talked about was what makes a good product manager. So for the folks in the audience who are interested in product management, I think his talk is well worth a watch. Thankfully it's available on YouTube and we'll include a link to it in our show notes. I'll also include a link to Dare's profile on Threads, where he's a prolific poster with the majority of his posts focused on product management. Now speaking of Threads, my second follow up is also based on that platform, and it's the account of Shreyas Doshi, who's been a product manager and led product management at pretty much every high profile tech company you can think of, including Yahoo, Google, Amazon, and Stripe. Shreyas now advises founders and executives and also coaches product managers. And he very generously shares a lot of great insight around product management via his Threads account. And again, we'll provide links to all of this in the show notes. In one recent example, he encapsulated the product management role in the space of just one post. Of course it was very high level, uh, but I think he managed to capture the essence of the role remarkably well, and in well under 500 characters. My favorite of Shreyas's posts, uh, threads, sorry, is one in which, over the course of 15 posts, he explains why product management is hard. I'll quote just four of those 15. Shreyas writes, quote, sometimes you should build what users say they want. Other times you shouldn't. Sometimes you should aim for pixel perfect product. Other times good enough is good enough. And sometimes launch a flawed product. Sometimes you should persist until your product is successful. Other times you should pivot. And sometimes the right call is to sunset it. This is why product management is hard. This is why it's so fun. For some people at least. It is also why good product management matters. And as a product management geek, I just love that. You know, I've practiced product management for over a decade and I've not seen anyone capture the essence of the role more cogently than Shreyas. So we'll include a link to his account along with the two specific threads I've referenced.

Joachim:

I love that. That's such a great, it actually is a great summary also of our conversations. I feel that we're always flitting between these two poles in every conversation, because none of this is clear cut. It's such a complex mess of all kinds of forces pulling you in different directions, and I guess we always try and enumerate as many of them as possible and sit in that tension, right? I guess that's exactly where you are: everything's pulling at the same time and it's all happening at the same time. That's such a great summary. Perfect. Perfectly summarizes the complexity of this.

Ernest:

Exactly. The answer is that it's complicated, and you know, people might not want to hear that, but that's the truth of it. And that's it. Just like you said, that's the fun of it as well. Um, all right. So we're going to dive into our main topic, our interview with, uh, our conversation really, with Anmol Anubhai. And with that, I'm going to pass it off now to Past Joachim to introduce the segment.

Joachim:

All right, well, let's move on to our main topic, and we're excited about this because we have our first guest on the podcast. And so let's just jump right into it. We're going to have a conversation today with Anmol Anubhai on the intersection of generative AI and product creation. And to be very, very clear, and this applies to all of us on this podcast, the views expressed here are ours and ours alone and do not reflect the views of our employers. And of course, this extends to Anmol as well. We're here just as a bunch of professionals chitchatting around the campfire and discussing our thoughts on things in general. So we don't represent any major corporations here. Anmol, welcome to the podcast. Welcome to Learn, Make, Learn as our first guest. How would you like to introduce yourself to the listeners?

Anmol:

It's such an honor to be here and talking about product design with the both of you. So to tell you in brief about myself and what I've been up to in these last couple of years, right? Um, I would call myself a user or human advocate in the domain of product shaping, like AI product shaping. Um, I've had the good fortune of working with teams at Google AI and then Uber for a little bit, and now Amazon Web Services for the last four years, uh, all incredible teams trying to, you know, shape human-first, human-centered products. And my job has been to aid these teams with qualitative and quantitative data that comes from in-field research around what the concerns, fears, and myths are, some of which are not completely invalid. So how can we truly build systems that users can trust? That has been my key focus in these last years.

Joachim:

That's super interesting. It's such a different angle from what we generally hear about the AI domain right now, especially in this, let's be honest, it's kind of a hype time right now with AI. And so having these words like trust and concern coming through, and what you're describing is, well, it's heartening to hear that this is actually ongoing work and this is happening right now. Something that you touched on, I just wanted to bounce off very briefly and then we'll go deeper into the larger conversation: the legibility of these AI systems. These large language models are, as everyone likes to throw around, black boxes. They're very complicated. How do you feel about that? Does your work touch on trying to make these systems legible so that we can actually trust them and we're able to audit exactly why it is they do something the way they do?

Anmol:

I think that's a great question, right? Across all our different studies, um, with many of these companies, one thing that, you know, has been true is the fact that all our customers want back-end visibility into how these models are even landing on the conclusions, making the decisions that they're making. Um, nobody here is signing up for magic. That's the gist in a nutshell, right? Like, uh, people might be wowed by it initially for a few minutes here and there, but then when it comes to actually adopting these systems, making them a part of their life or even work, they want to understand what it is doing on the back end, even if they are not experts, say, in the field of AI/ML. So that is definitely there. And that has been our push as the design team, which is: how do we provide that back-end visibility without making it overwhelming as an experience for our end users?

Joachim:

Awesome. Before we dive into the deeper conversation with all of us here, Ernest, you had a couple of thoughts that you wanted to use to set up the conversation.

Ernest:

Oh yeah, no, I appreciate it. Thanks Joachim. And you know, we do want this to be a largely free form conversation, but I just thought it might help to put a few stakes in the ground to start. And I just don't think we can have a meaningful conversation about generative AI and product creation without highlighting some of the key concerns voiced particularly by people in the making community. And for starters, people who work in digital product creation, especially designers, are almost certainly aware of the concerns that came out of the aforementioned Figma Config conference this past June. For the benefit of anyone who's not familiar with Figma, it's an online collaborative interface design platform that's become very popular as a prototyping tool for folks designing websites and mobile apps. Um, imagine a version of Google Slides created specifically for people making websites and apps and you'll get a pretty good sense for what Figma does. The source of these concerns was a new suite of features launched by Figma called Figma AI. Quoting their announcement: We're excited to introduce Figma AI, a collection of features designed to help you work more efficiently and creatively. We often talk about the blank canvas problem, when you're faced with a new Figma file and don't know where to start. The new Make Designs feature will generate UI layouts and component options from your text prompts. Just describe what you need and the feature will provide you with a first draft. By helping you get ideas down quickly, this feature enables you to explore various design directions and arrive at a solution faster. Unquote. Figma AI was introduced at the company's Config conference in a session led by a product manager named Cemre Gungor. And just a quick side note, the video of that session has since been removed from Figma's YouTube channel, but we'll share a link to an in-depth post by Figma documenting the new features. Now this sparked a couple of concerns that I think will help set up our conversation with Anmol. The first, which is a broadly held concern associated with AI, is job loss, in this case amongst interface and UX designers. Speaking to this, an interface designer named Shane Allen posted to Threads with the following in response to Cemre's presentation. Quote, a product manager showing designers how to use AI in Figma. While the irony isn't lost on me, the reality is this will change the designer to product manager dynamic forever. Unquote. Asked to elaborate on what he meant, Shane wrote, quote, Just make it like this, says the product manager, as they show a designer an AI mock-up in Figma that they've already asked engineering to build, unquote. So clearly the elimination of entire classes of jobs, including white collar jobs that have thus far been insulated from the steady march of automation, is a big and growing concern. And then gasoline was poured on this fire when footage of a talk by OpenAI CTO Mira Murati surfaced that same week, in which she said, quote, Some creative jobs maybe will go away, but maybe they shouldn't have been there in the first place, unquote. Now, I should note that the coverage of this has been hugely unfair in that Murati's quote was very much taken out of context, and we'll provide a link to the full video of her comment so that you can judge for yourself.
But the response to Murati's words, this new Figma AI feature, and Apple's Crush ad from a little while back show that the broader fears of job loss stemming from generative AI have very much extended into the world of creatives and makers. Now, the second concern, which is closely related, is rooted in intellectual property. And again, Figma's new AI tool was at the center of the storm. Quoting from a Guardian article, Quote, the new AI tool launched by Figma allows users to enter a plain English description of an app they want to create and watch as the user interface is generated out of thin air. But shortly after it launched, Andy Allen, founder of iOS app developer, not boring software, discovered that multiple requests to design a weather app repeatedly resulted in a program that was almost identical to the built in iOS weather app that appears on Apple devices. This is a weather app using the new make designs feature. And the results are basically Apple's weather app. Alan said in a post on X this week, try it three times, same results. Unquote. We'll include a link to the Guardian article and show notes. It includes Alan's tweets, which lay bare just how strikingly similar Figma, uh, Figma's AI generated weather app prototypes are to Apple's built in weather app. Uh, and there are two big issues here. First, as Alan points out. Use of this feature could put app designers in legal jeopardy as he wrote on X quote, just a heads up to any designers using the new Figma make designs feature that you may want to thoroughly check existing apps or modify the results heavily so that you don't unknowingly land yourself in legal trouble unquote, because if you were to unwittingly use Figma AI to create an app to happen to mirror the appearance of an existing app, the developer of that existing app could take you to court for copyright infringement. Now, as troubling as that could be for an indie designer, the even bigger concern is around ownership. This is a massive oversimplification and Anmol, please don't hesitate to jump in if I'm getting any of this wrong, but generative AI platforms work by analyzing huge pools of data, text in the case of text generators, images in the case of image generators, um, to uncover the underlying patterns in that data, and this is called training, then building from the patterns identified through that training process, the platform is able to generate new content based on a prompt from a user, for example, make a weather app. Now, As we discussed in our episode titled the perils of crossing over from niche to mainstream, a big issue here is that with very few exceptions, the people who created the content used to train these generative AI platforms, content without which these platforms could not exist, are not compensated. And this becomes doubly problematic when a creation platform uses its own users data to train its generative AI models, in effect, charging creators for their own demise. Now, to be fair, Figma denies that they've done this. Returning to that Guardian article I referenced earlier, quote, Figma's chief executive, Dylan Field, posted a defense of the company's feature. Despite appearances, Field said, the tool was not created by training an AI system on work done using the Figma app by other customers. Instead, the service used off the shelf large language models to instruct a more hand coded design system, unquote. End quote. But at least in my opinion, this is really a difference of degree and not of kind. 
Fundamentally, any LLM-based generative AI system is using patterns underlying existing works to generate, quote unquote, new works based on those patterns. And Figma is hardly the only creative company to come under fire for this. As noted in an article from The Verge, Adobe faced intense backlash over its terms of service agreement and had to announce a tweaked version that makes it clear that Adobe will not train AI on user content stored locally or in the cloud. And many longtime Instagram users were enraged to learn that Meta has started to mine users' Instagram images and videos to train its AI models. As a filmmaker and screenwriter wrote in a piece for Fast Company, just when you think that Meta had already committed every imaginable wrongdoing, the company has pulled more garbage out of its clown car's trunk by mining user creations for its own AI. Meta is effectively killing Instagram's spirit while flipping the ultimate finger to all Instagrammers, especially those who joined the social network back when it was an independent playground for creativity and self-expression. Alright, so clearly people in the business of making feel very strongly about generative AI. They're worried about appropriation of their work, devaluation of their work, and the seemingly existential threat posed to their livelihoods by generative AI. Now, Anmol, I know this is quite a lot to lay on you, but do you think these concerns are warranted? And as clear as these risks are, do you see potential upsides of generative AI for people in the business of making products?
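To make the training-and-generation idea Ernest describes above concrete, here is a minimal sketch at toy scale: a character-level Markov model that "trains" on a small pool of existing text, records the patterns it finds, and then generates new text purely from those patterns. This is not how a production LLM is built; it is only an illustration of why outputs can end up mirroring the material a system was trained on. The sample corpus, context length and prompt are made up for the example.

import random
from collections import defaultdict

# Toy "training" corpus standing in for the huge pools of text a real model ingests.
corpus = (
    "a weather app shows the current temperature, an hourly forecast, "
    "and a ten day forecast for your location. "
)

ORDER = 4  # how many characters of context the model conditions on

# "Training": record which character tends to follow each 4-character context.
transitions = defaultdict(list)
for i in range(len(corpus) - ORDER):
    context = corpus[i : i + ORDER]
    transitions[context].append(corpus[i + ORDER])

def generate(prompt: str, length: int = 120, seed: int = 0) -> str:
    # "Generation": starting from a prompt, repeatedly sample the next character
    # from the patterns recorded during training.
    rng = random.Random(seed)
    out = prompt
    for _ in range(length):
        context = out[-ORDER:]
        choices = transitions.get(context)
        if not choices:  # unseen context: nothing was learned for it, so stop
            break
        out += rng.choice(choices)
    return out

print(generate("a we"))

Because the training pool here is tiny, the generated text is nearly a copy of it, which is the degree-versus-kind point in miniature: scale the pool up and the copying becomes statistical rather than literal, but the output is still built from patterns in other people's work.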

Anmol:

Thanks for this question. I think this is such an important discussion to be had, right? Um, I think where I would like to start is by asking ourselves the question, what is even creativity, right? What are these concepts, creativity, productivity? I feel like, and this is my personal take, but I've also seen a lot of creatives in the past couple of years voice their opinion on this and have the same take, which is: for creativity, you fundamentally need to be human, and need to have had certain lived experiences to be able to bring that story, that individuality to whatever it is that you create or make, right? That is what resonates with your audience or with someone who is a part of it, say if it's a co-creation exercise. So to hope and pray that an AI model is going to start making art by simply, say, supervised learning, just like how you were saying, right, where the model has access to a lot of data, the internet, and after simply going through all of it, using permutation, combination, or some logic, it's making something. The question to ask ourselves as a society is, would we even categorize that or call that art in the first place, or any kind of a creative output? From research, what we found, so we also wrote this paper on human-AI partnerships, and again, this was with two of my wonderful co-authors, Ishani and Behrooz. What we learned, we did a lot of these interviews with developers, even besides this study, we have done so many extensive interviews with the developer community, because in this case we are making AI-led tools for them, and one thing that we learned is that users actually want more time and energy back to be able to be creative themselves, and they don't want to do certain really boring, repetitive tasks. And I think that is where product shapers can play a role, if you understand what it means to create value versus just focusing on quote unquote productivity. You know, what does it mean to empower your employees so that they get that time and energy back to focus on value creation, on true creativity? If you come from that route, then I think you're actually going to end up making use of AI in really meaningful and powerful ways. Because let's face it, you know, there are so many tasks, like for example in a developer's life, this entire paper is on code migration, which is simply taking a legacy code base, looking at the language and converting it to a different language. Now that is a very mechanical, arduous task. Uh, developers are also creators in some sense, and they want to actually solve real customer problems, is what they were telling us throughout, right? Nobody wants to do this. So if we see AI as a tool that is here to sort of partner with humans and take away the boring repetitive work from their lives so that they get to do the more joyful creative work, then I think we are on the right route. But if we see AI, just like how you're saying, as this director, with humans simply being there in the system, then I think we're approaching it in an entirely incorrect manner, because I don't think our models are even there yet to be able to make those types of nuanced decisions.
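For readers who want a picture of what a human-in-the-loop migration step might look like in practice, here is a hedged sketch using the OpenAI Python client: a model drafts a translation of a small legacy function and a developer reviews the draft rather than committing it automatically. The model name, the prompt wording, and the sample snippet are placeholders for illustration; this is not the tooling from the paper Anmol describes.

# A sketch of one human-in-the-loop migration step.
# Assumes the `openai` Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

legacy_snippet = """
public static int sumEven(int[] xs) {
    int total = 0;
    for (int x : xs) { if (x % 2 == 0) total += x; }
    return total;
}
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You translate Java functions to idiomatic Python. "
                       "Return only code, preserving behavior.",
        },
        {"role": "user", "content": legacy_snippet},
    ],
)

draft = response.choices[0].message.content
# The draft is a starting point, not a commit: a developer reviews and edits it,
# which is the partnership model the interviewees describe.
print(draft)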

Joachim:

That's a super interesting starting point for this whole conversation, especially when you started talking about creativity. The process of creating is really the magical part. It's not really the outputs. The outputs are kind of nice and a good thing to have, and usually, if you're working in a company, that's how you get judged, on what the output is. But the process really changes the maker. And so when you talk about having generative AI handle some of those menial tasks, do you think, if we keep getting rid of those menial tasks, we maybe are losing some of that process that is changing the individual, and maybe it is the fuel that actually feeds the creativity? Are we going to lose a little bit of that with the generative AI thing, or is there a way to keep that going?

Anmol:

I think, to be very honest, I didn't think of it that way. That's interesting, what you point out, right? Which is, when you are also doing some of those menial tasks, some part of your brain is thinking about the system, maybe, or the other complexities, and then that also becomes a part of your process when you're actually, say, solving the final problem. Um, that's a really good way of looking at it. Sure, we are losing that, to be honest, if we approach it the way I was describing earlier, right? But at the same time, the pro that I see there is the fact that we are also inviting many types of creators and leveling the playing field. By that what I mean is, right, like there are so many folks out there who are now calling themselves citizen developers, because they don't need to learn programming. They don't need to be computer science experts. Small businesses who had a lot of different ideas, but so far did not know how to maybe, say, hire the right developer to make it happen for them. Now, these folks, they always had a knack for problem solving, but now they have these skill sets at their fingertips. And they themselves are now actually also building solutions on their own, and the excitement of it, right? To be able to make something on your own without having to spend time learning things like a C programming language or Java or whatnot. Um, so that is, maybe I'm being an optimist here, but the way I see it is that we're inviting some of these business users to also start making use of these technical products. And very soon we might ask ourselves the question, what does it even mean to be quote unquote technical, right? Like, everyone is a creator in some way, I strongly believe that. So I feel like, in some sense, if we use these tools, if we use AI correctly, we might just end up empowering creators coming from all kinds of diverse backgrounds to jump in and start making, to start problem solving.

Joachim:

Yeah, I love that. The optimism. I think this is where I like to see AI optimism. I think this is exactly the domain that makes the most sense, because, as you said, removing barriers to entry means that now more ideas can flourish and you can have meaningful conversation. And I think, Ernest, in your setup, you mentioned kind of the product manager who just says, hey, make it look like this, and it becomes this directive, ordering approach. So all of that to say, I don't think it's the technology itself that's to blame. It's the organizational structure. Right.

Anmol:

No, I just wanted to add one quick thing that Ernest also pointed out. And I thought that was such a fantastic example, right? That the PM said, hey, you go make it this way, and then finally the model ended up spitting out almost another version of the Apple weather app, right? It ended up looking exactly similar. Um, I feel this is exactly why you cannot make products that are end-to-end AI led. You need human-AI partnerships. You know, the folks who are, say, experts in the domain, say designers in this case, right? They're still going to be the folks who are going to lead the effort, and AI is probably going to be a helping hand in the process. Uh, that's one thing that we see again and again in our studies too, which is these humans, they know things just based on institutional knowledge that sometimes you cannot find for the life of you on the internet or somewhere else. It's because of their experience. Um, and as a product shaper, you have to respect that. You have to make systems that are going to partner with these humans instead of just, you know, going off and doing something on their own, because that's never going to give you exactly what you want, um, as the end user or the stakeholder. Yeah.

Ernest:

I love that, just the fundamental approach, and also we'll include a link to Anmol's paper in the show notes as well. But I just love the approach you took to that work, where, you know, the assumption you wanted to test was this idea of partnership versus replacement. And I was just curious, what led you to looking at that question when, you know, most people I think are looking at replacement?

Anmol:

A lot of folks played a crucial role, but I have to say that my manager back then, she is this amazing product director, and she really encouraged me to also study concepts like productivity and the impact of Gen AI tooling on productivity, et cetera. And, um, when I started even looking at papers from a bunch of different companies and organizations across the board on this, we realized that, number one, we are approaching this incorrectly, which is, you know, a lot of companies today, they feel like, okay, if we're going to, say, adopt an AI-led something, we are going to be able to get rid of X number of people or employees. Um, but that will never happen for now. And I think, even like my personal take, maybe it's controversial, but I don't see it happening in the future either. I'll tell you why. Because, number one, exactly like what we were describing, right, you need those experts to be leading the effort. But I think these organizations, without someone to tell them if the output is even okay or not, or if it will meet a certain bar or standard or not, they're not going to want to adopt it. So sure, they might want to try it in a sandbox kind of environment. But then when it comes to actually incorporating AI into existing workflows, right, that is a very high stakes decision. So you really want to be sure about certain things like accuracy, precision, what we were describing earlier, which is the back-end visibility. What is the logic here? Uh, they don't want a black box. That's one thing that we were hearing across the board, which is why we were like, okay, if you don't want a black box, what do you want? And then, just like how you can see in the paper, a lot of developers are talking about how we want an AI tool that acts like a peer, you know, that I'm going to work together with. And I think it's very clever to do this because, as an AI product shaper, you also automatically make systems that will be more easily forgiven by the end user. You know, they are not expecting it to come out with the hundred on hundred output or response. They're expecting it to partner with them, you know, throw out a bunch of different ideas, and for them as the humans in the system, in the loop, as they call it, human in the loop, to discuss and decide whether something even makes sense or not, and then to build on it together. So it's a win-win, kind of, you know, you don't have to work towards that perfect hundred on hundred end-to-end AI system. Nobody's asking for it.

Ernest:

I think that's great. I also loved what you said about, you know, this focus on value over productivity. Cause I do feel like that's what's driving so much of the excitement right now around AI at a C-suite level: these visions of incredible productivity without really thinking about what's the actual value we're getting out of it. Um, how would you try to persuade someone to, you know, shift their thinking away from being so productivity focused and think more broadly about value?

Anmol:

Yeah, absolutely. So I think this again is sort of tied to what we were discussing a little bit earlier, about creativity and design, for example, and art also in many ways, right? My personal advice to a lot of these companies, small, big, large: ultimately you want folks or solutions that are out of the box, that are cutting edge, novel, right? Almost sort of like art, like extremely creative solutions to difficult problems. You need humans to be able to do that. Simply put, I don't think any AI model is going to do that for you without any guidance. And as a responsible employer, you want to empower these folks, your humans, to be able to be their best creative versions. So, um, instead of productivity, if you focus on value creation, then you can do really smart things, like maybe study your existing workflows and processes end to end, identify the bottlenecks, identify the menial, not-so-fun work, which might also be leading to churn, you losing some of those gems in your organization, and then shape AI systems, or bring a flavor of AI, to focus on those bottlenecks. Because actually, you know, a lot of experts will also tell you this, but AI systems do well on small, scoped-out tasks, right? They're still not able to handle very fuzzy, big problems, but if you have a small, scoped-out task, it's probably going to do its job. So I think it's both clever and responsible to get AI to handle some of those bottlenecks while you free up the bandwidth, give back those cycles to your employees to focus on real problems, be more creative, maybe take a few more risks, because they have the time and the energy to work through it.

Joachim:

All of that, I think, resonates with all of us here in this conversation, because it really does put the focus on kind of the special magic that humans still have, and you use the technology to augment that and expand what it is that's possible. I think that's where we've seen technology have the most impact in human societies: it expands our ability to do everything. So what is the best way to get the AI to interface with us? Cause right now it's very much ChatGPT that's the one in the public's consciousness, and that is, as the name suggests, a chatbot. But if I'm writing code or I'm trying to problem solve, it doesn't feel great to just see code appear out of nowhere. Have you put any thought, not necessarily even where you are right now, but just privately, into what a great interface looks like for a system like this? Let's say I'm a developer. Do you have any best practices or ways of thinking about what would make a good interface for that type of person, for example?

Anmol:

Yeah, I think that's, again, such a fantastic question, because I've also had these discussions and debates with so many other designers throughout, in Seattle, in the Bay Area, even outside of Amazon, and they are all having this exact conversation, which is: not everyone wants to be talking to a bot. Right, like there are folks who are introverted, often ambiverted too; talking seems strange as an experience. If I want something, why would I want to talk to a bot, however accurate or great? So the way I see it is that it will not remain just a, you know, chat or natural language processing, NLP, kind of experience. The way I see it is that maybe our UIs will become more adaptive. By that, what I mean is something like an organic user interface, right? So I remember reading about this concept many years ago, which is, um, OUI, which is when a user interacts with an interface, the interface learns about the user and shapes itself accordingly. And I think that is the direction which is going to be really exciting to explore, right? Because these models, they can retain the history, they can understand a user so much that it sort of tailors itself accordingly. So you don't have one experience or one solution for everyone across the board, but more tailored, meaningful experiences.

Joachim:

That's super interesting. Yeah. That immediately got my brain thinking about something very different. Like, the simplest version in my mind is auto-complete, but a natural auto-complete that's not irritating, but feels like an extension of where I am and what I'm thinking. Not Clippy. Not that, not Microsoft's paperclip, but, um,

Anmol:

One quick thing to add to it. It's a very interesting discussion, right? Because there are also cons. I'll give you an example. Suppose you use an app, any app, maybe it's your Microsoft Word, and there are just certain features that you use all the time, and a lot of other features that you never use, for whatever odd reason, maybe they're not as relevant for your work. The OUI, or the intelligent UI, will actually figure out a way to make those features that you use often more discoverable, easy, accessible while you're working, right? And maybe hide the rest, making for cleaner, less overwhelming experiences. However, there is a slight caveat to this. Um, so I was working at the Seattle Times while I was in grad school here at UW, and we made a similar intelligent reading experience for young readers using fuzzy clustering, which is a kind of AI algorithm. However, a lot of young readers were super skeptical about this, because they were like, wait, I'm only getting a lot of what I am already into, and it's like there's no way to switch out of this, similar to what we often also hear about social media apps and whatnot, right? So what if you want to try being a different person? What if you have grown? How will the model, the experience, keep up? It's also a very interesting question that I think designers and researchers will have to answer at some point. Yeah, absolutely.
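For listeners curious what "fuzzy clustering" refers to here, below is a minimal sketch of fuzzy c-means, a classic soft-clustering algorithm from that family: unlike hard clustering, each reader gets a degree of membership in every cluster, which is what makes a "you're mostly in cohort A, but also a bit in cohort B" framing possible. The reader-interest vectors and parameters are invented for the example; this is not the Seattle Times system.

import numpy as np

def fuzzy_cmeans(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Soft clustering: returns cluster centers and a membership matrix U,
    where U[i, k] is how strongly sample i belongs to cluster k (rows sum to 1)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)  # random initial memberships
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distance from every sample to every center (small epsilon avoids /0)
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        new_U = dist ** (-2.0 / (m - 1))
        new_U /= new_U.sum(axis=1, keepdims=True)
        if np.abs(new_U - U).max() < tol:
            return centers, new_U
        U = new_U
    return centers, U

# Hypothetical reader-interest vectors: share of reading time spent on
# [local news, sports, arts] for six readers.
readers = np.array([
    [0.8, 0.1, 0.1],
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.1, 0.8],
    [0.4, 0.3, 0.3],  # a mixed reader with meaningful membership in several cohorts
])
centers, memberships = fuzzy_cmeans(readers, n_clusters=3)
print(np.round(memberships, 2))

The fuzziness parameter m is the design dial: values near 1 give almost hard cohort assignments, larger values spread membership across cohorts, which maps onto the question Anmol raises about how tightly an adaptive interface should lock a reader into one persona.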

Joachim:

Oh, that's so interesting, because that kind of opens up, in my mind, when you phrased it as they want to try on being a different person, it feels like it would almost be like, well, you've now formed these clusters around the types of behaviors that create this cohort and that cohort and that cohort. But if you're very transparent about that and say, by the way, you're in cohort A, there are also all these other ones out there, do you want to try one of those on for size? Again, we've become so accustomed to obscuring those things. Maybe if there's agency in how you're then able to break free from the persona that you've been given and you can try something else out, that would give you enough of a sense of freedom that you can still explore other things. But really interesting example.

Anmol:

Um, you know, this is where we need our fantastic technical writers, because we need them to actually help us break down these concepts and make them more digestible for our end users. Not every user, at all the different levels, will actually be able to understand something like a cluster, or what exactly you mean by that, right? So, um, I think that's another big question right now, which is: how do we make these concepts simple to understand without losing the nuances?

Ernest:

I think this is such an interesting question, and to me it makes clear that interface design is going to be so important to addressing these sorts of questions in the years ahead. So the field you're in, Anmol, I think is going to be really exciting for quite a while to come. But one very old school example that comes to mind for me is, you know, the traditional paper newspaper that, by way of the judgment of an editorial staff, exposed you as a reader to a range of topics that you, you know, may not have sought out yourself. Um, and I recall, I'm old enough to have grown up reading paper newspapers, I really valued that. That was one of my favorite things about reading the New York Times. I grew up with the New York Times, having grown up in New York. Tuesday was science day, and I would get exposed to this content that I just wouldn't have been exposed to otherwise. And to the point that you and Joachim were talking about, it does feel like we're missing that right now, where, you know, like you were saying, just by virtue of the algorithm as it is today, we're kind of kept in our own little bubbles. Um, are you aware of anyone doing work in this space? Because I guess that is the challenge: how do you set a KPI against that, right? Like, how do you say we're doing this well or not in terms of exposing people to things that they're actually not seeking out?

Anmol:

Absolutely. I think the one name and team that comes to mind is actually Google AI UX. I also used to be a part of that team. Um, and Jess Holbrook, with some of his amazing researcher colleagues, wrote this guidebook called the People + AI Guidebook, wherein he talks about the importance of providing end users with this visibility and transparency, and how best maybe to go about it. Um, so I think that guidebook, in fact, it also got me thinking about so many of these questions that we are discussing today. I highly recommend that document. It's also available online as far as I remember, so you can share a link.

Ernest:

Oh,

Joachim:

I was just going to reach for a different angle related to this, which is the other side of the feedback loop, and bringing it back to your paper, there was this fundamental distrust that the developers had of the AI. I found that really interesting because some of them said unit tests are no good, it's not enough to be able to trust what this thing has done, I need to see a more holistic test. And that one struck me as really interesting because, having worked with software engineers, and having coded myself and been in a production system, no one does that. I found it interesting that these developers, when taken out of their standard development environment, you give them a different interface, they start evaluating work differently. And I wondered, do you also follow that feedback of: because I've interacted with an AI system, this now affects the way I work even without the AI system? Does that funky, weird feedback loop pop up somewhere?

Anmol:

I think that's, uh, again, such a great question. I wish my colleague Ishani was here to answer it. I really miss having her here. Um, we both partnered on this, and she led several of these sessions, and we had some great discussions about exactly this, right? Which is, are we also seeing a change in the mental model? I think one thing that we have to acknowledge, and maybe show compassion for our own selves as AI product shapers, is the fact that it is genuinely hard to shape AI experiences and products right now. And one key reason is that these customers, to your point, even in the beginning, they have absolutely no mental model for what to expect or where to start, right? So it's a very different ballgame. It's not like designing a solution for something that you have seen firsthand in real life or have some reference point for. Um, so it's more like shaping something for folks who are still themselves forming their opinion and shaping their mental model about the product or service in question. So in this case, about the testing thing, I think one thing we realized is that they started seeing AI as a teammate. They talked about how, during code review, if they were, say, reviewing a team member's code, they would of course offer very critical, real feedback, but then also be forgiving when they need to, and wait for them, or in this case the model, to get better over time with their feedback. So I think, more than a very different way of working, the flavor they're bringing is that they're seeing AI as a peer or a teammate. I don't think they are changing the way they are working, but that is how they're approaching it. Does that answer your question?

Joachim:

Yeah, it does. And it just makes my mind go off into all kinds of crazy directions. Because you've put this human in the loop, you have to first give them adjustment time. They need a mental model, as you said, of the thing that they're interacting with in order to get the benefit of that thing, which is so interesting because that's very anthropomorphizing, right? We're turning them into a peer. I think back to Ernest's point at the beginning about the C-suite just seeing productivity numbers. What they might actually be more interested in and should be focusing on, which is hard to measure and doesn't have an obvious KPI, is really how is this affecting our people and what are they doing differently? How are they responding to these things? And is there some other magic there that we haven't quite been able to understand?

Anmol:

No, I was just going to say that I really like your point on anthropomorphism, right? Like, are we trying to do that here, and where are we going? Essentially, I think that's a big question that is left to be answered by researchers, even product shapers. Um, and I don't think I myself have an answer for that yet, right? Because there is so much debate around that too. We must acknowledge the fact that so many folks, so many researchers, even talk about how, for human beings, having that kind of a human-like relationship or that expectation with a machine, an AI model, is that even healthy for the user? Is that responsible? So that is also another question that many researchers and product shapers are asking. But then when we do these types of studies, it's really interesting to see that the end customer almost starts thinking of it as a peer, right? So where do we draw the line? This is just my hypothesis, but I feel like, with that NLP kind of experience, you know, it's talking to you, are we already sort of setting a certain expectation and a stage for that to happen? That's the question I am asking myself. What if it were not a conversational experience, would that maybe not lead to them having these kinds of expectations that they generally have of a peer? Um, that's, I think, something that I am actively thinking about these days. Yeah.

Ernest:

I was curious, actually, you know, you've referenced a couple of examples, um, a couple of resources that have been useful for you. Is anyone doing it well right now? You know, I know everyone's on the AI, Gen AI bandwagon. Would you kind of point to anyone and say, okay, yeah, they're actually doing some interesting things and maybe heading in a more interesting direction than the majority of folks?

Anmol:

That's a good question. I would say, especially going back to your point on productivity and KPIs, right, one framework that I personally found really interesting was from Slack. They came up with the SPACE framework. Their customer research team came up with that, and I don't think they touch upon AI specifically, but, um, it was a very holistic framework overall that also touches upon some of these points that we talked about. Not value generation specifically, but it was more holistic and better than just, say, measuring velocity or amount of work done. It was not dehumanizing, I thought.

Ernest:

That's a great example. Um, and you know, I know this is always a tough one, but there's so much froth around this right now. People think it's going to change the world, destroy humanity, or, you know, do nothing at all. Where do you think this will be in, say, five years from now? Uh, you know, do you think it's going to actually be a part of all of our lives, or is it all overhyped?

Anmol:

I think, um, I continue to maintain, right, like, if I were to describe myself and my opinion, especially after also doing this research, I think it's wrong to be a complete pessimist and think that, okay, maybe it's going to, you know, be dangerous, and to be fearful. I don't think one should be fearful of this technology. Sure, there are a lot of unanswered questions about certain aspects of it. Um, so I think it's very important for us as product shapers, right, whatever role you might be in, I think this is the time to be vocal and to join the conversation and to help your team, your leadership make responsible decisions, in really small ways even, right? So instead of being fearful, I think the way we should approach it is that there is a lot of potential, and now the question that is left to be answered is how best to tap into it, right? For work that, just like how we discussed, is not joy inducing, is not something that people look forward to, so that we can free up their time to do more joyful work, or maybe even just go back home and spend time with family, right? Like, make us more human, if possible, if you will. So that's how I see it. Yeah.

Joachim:

Okay, let's go a little bit spicy then. Asking the bigger question of where is this heading and is this heading in the right direction? You've addressed that question. I wanted to ask the same question from a different angle, which is: there are known physical consequences to these systems, right? I'm thinking about water usage, energy usage, all of the crazy environmental stuff that we're dealing with right now. We're seeing Microsoft blasting through its carbon quotas and all of these things. And as I'm using Copilot or something like that, that's never really part of my experience, right? Do you think there's a role in that domain as well? Because we've talked about transparency of the actual AI system, the way it's generating its answers. But do you believe that there's a space for people in the HCI community to be thinking about showing the consequences of those types of interactions with AI on the real world, in terms of all of the environmental impacts I was talking about? And how can you do that without distracting and just making someone feel sad, as opposed to actually making them feel empowered to do something differently or think about what they're doing? Where do you land on that one?

Anmol:

I think, um, I myself, I'll be completely honest here, right? Like, I've been so neck deep in tech and user research and whatnot. I always heard a lot about sustainability. For example, I saw that concept being thrown around a couple of times in meetings, et cetera. However, it was only when I was talking to, you know, my best friend, who is an advisor in the sustainability domain, that I was able to also ask some really 101 kind of questions, right? So I think it is so important, I even tell this to myself, to be curious, and it's okay if you have really basic 101 questions, but to actually start asking them. I think so many of us, including myself, still don't fully know a lot of these concepts, to be honest. So it's so important to continue to be curious in this domain, right? Just like how I was telling you a little while ago about how our customers don't seem to have a mental model yet, and we are sort of shaping on the fly with them while their expectations and mental models get formed, similarly, this is also true for us as shapers, as stakeholders in this domain. If we're not curious, if we're not asking the right questions, if we do not even give the sustainability experts a seat at the table, it's going to be hard to actually get to the bottom of it. Um, so my mantra is always: be curious yourself, and get an expert involved early on, seek their opinion, ask those questions.

Joachim:

I love that, because we brought in an expert to talk about exactly this thing. So we're living the act of doing this. Um, yeah, thank you for that.

Ernest:

I'm just echoing Joachim's point. This is an awesome conversation. I was just curious before we wrap up, did you have any kind of closing thoughts you want to share about the intersection of generative AI and product creation? Any kind of last things maybe that we didn't touch on that you want to highlight?

Anmol:

I think, um, it's so interesting where we are at right now, and of course, I want to be sure that I make everyone feel heard and seen. There are certain aspects that can be worrying, and folks are fearful. I completely understand. But at the same time, you know, I think going back to what I was saying, maybe I'm just repeating myself at this point, but I feel like there are so many really exciting opportunities too, right? So just like everything else in life, I feel like it all boils down to how we choose to approach it and how thoughtful we choose to be at every point. And to just be sure we don't do one thing, which is dehumanize our fellow peers or our employees. It's very important to take pride in how human we all are and how that is just so very special. And I don't think any model, however accurate, precise, or amazing, is going to be able to replace that, right? There is something special here that we bring to the table. So it's very important to just remember that and to focus on building the best tools for us so that we can be our best versions, is the way I see it. Yeah.

Ernest:

Oh, that's fantastic. Those are great words to end on. Um, all right. Well, now that you've heard our perspectives, we want to hear from you. If you work in product creation, are you excited about generative AI or does it keep you up at night? Where do you think it could help? And what do you think are the biggest pitfalls to its adoption? Please share your thoughts with us at LearnMakeLearn@gmail.com. Now, let's move on to our recommendations of the week, and we want to include Anmol on this as well. Anmol, is there a product or service or article that you'd like to praise or pan for our listeners?

Anmol:

There are many, um, but I think to name a few, I would say, going back to what I was saying, right, one of my favorite design managers, Dave Brown, who is also at AWS, talks about this, and I think it's important, which is the fact that, as product shapers, it's a slippery slope, and if you don't know the nuances or you don't understand the concepts, you might end up making certain assumptions that are false, right? So it's very important to educate yourself about certain AI fundamentals, really simple, basic ones to start out with. Um, there is this amazing Coursera course series by the very knowledgeable Andrew Ng on the fundamentals of machine learning. So for someone who is just looking to get started, I highly recommend that course. That's a great one. Um, there's another platform that AWS actually put out. It's called PartyRock, and it's another fun platform. You don't need to really be a developer or, um, you know, be in a technical role at all. You can just get started, have fun with all types of models, and see what works for you, what kind of solutions you might like to shape with AI models. So that's another platform that I recommend. There are a couple of other articles that we also talked about, so the People + AI Guidebook from Google, that's another great resource. So yeah, those are the ones that I wanted to share.

Ernest:

Those are awesome, and we'll include links to all these in the show notes as well. Thanks for that, Anmol. How about you, Joachim? Anything you want to share this week?

Joachim:

Yeah, I actually for once have a real product to recommend. I don't think I've ever had a product to recommend before. So anyway, um, my recommendation is the Boox Palma mobile e-paper reader, um, which feels like such a ridiculous luxury item, to have another e-book reader. Um, and you know, it costs $280, so it's not cheap. I did try and give it a proper, rigorous testing. And the main thing that always bothers me about technologies is if they require the cloud and accounts and all kinds of extra hoops that you have to jump through to make them work. So my test for this device was: how quickly can I take a book that I already have, an EPUB or a PDF file, and get it onto that e-reader? What does that look like? Once you get it, you have to connect it to your Wi-Fi to get any of the benefits. It also has internal storage, and you can put a microSD card in it. So if you want to be really, really old school with no network, you can do that and keep the thing off every network and just load everything onto an SD card and then pop it into that. That was a little bit too inconvenient for me, but it does have a transfer program that basically opens it up to all other devices on your Wi-Fi network, and then you can just drag and drop in a browser onto it. So I was surprised. I was really blown away by how easy it was. I didn't really dig into the hardware aspects that much, but it basically looks like a cell phone. It looks like an Android cell phone, but with a really high resolution e-ink display. Um, it has the same form factor as a phone. It's pretty slim. Maybe to some people, the point of reading is to have a device that feels different from a phone, and you want to project that you're intelligent and read things, and so that's why you'd like to have a Kindle, which is a slightly different format. I didn't really mind having something that looks like I'm on my phone because, uh, the screen is really great. It's big enough and you can upload stuff so quickly. Yeah, it's been pretty easy to set up and I really like having this and my library with me. It is annoyingly good because it looks exactly like a phone. There's a reason why the phone form factor is so compelling. It is great for holding. They just nailed it. They understood that that is probably the sweet spot for a lot of people, as opposed to the Kindle, which is slightly wider, takes up a little bit more space, is a little bit bulkier. This can slip in your pocket just like your cell phone, and then you have an e-ink reader, and it's pretty damn good. And the beauty as well is it runs Android. You don't have to log into your Google account to use this device. I am only using the Wi-Fi; I'm sure there's data that's being transferred in the background, but other than that, it can be a full device with all of the typical stuff. But because it's an E Ink display, it's not fast enough at rendering the image, so browsing is incredibly frustrating, because when you scroll, you know, if you've used an E Ink reader, it flashes a little bit and then the next image is there. And you can change the settings to get it to go faster, and it degrades the quality a little bit, but it's just enough friction where you go, no, this will always be a reading device. So despite it having a browser and voice recording and all of the things that you'd expect from an Android device, those things don't really come into play. Well done, they got me. I'm not returning it.
I'm holding on to it a little bit longer. And yeah, if you want a device that is pretty hassle free and, depending on where you get your books from, if you're not necessarily always getting them from legitimate sources, this is a very good reader for that purpose as well. No questions asked. They let you upload anything. that's my recommendation for this week.

Ernest:

That's so great. I'm so glad to hear your take on it. I've been really intrigued by the BOOX devices as well, so it's really great to hear your firsthand account. Um, my recommendation this week is also a physical product, which is pretty rare, for both of us to be sharing physical products. And it's also directly related to the podcast. You may have heard the unkind expression that so-and-so has a face for radio. Well, I've come to learn that, unfortunately, I have a voice for text. My voice is low and breathy and kind of difficult to capture in a way that's easy for listeners to make out. So in hopes of improving on this over the time Joachim and I have been podcasting, I've tried several microphones, none of which really did much to improve the situation. But after reading many, many reviews, I decided to give one last microphone a try. It's from a company called Earthworks Audio, and it's their Ethos broadcast condenser microphone. It originally retailed for $699, which was well outside of my budget, but it recently came down to $399, so I decided to give it a go, and I was very happily surprised by the results. There's, you know, really only so much that any microphone can do, but at least to my ears, the Ethos was able to capture my voice with a clarity that none of the other mics I tried could match. So I was very excited.

But then, while editing the first episode where I used the Ethos, I noticed some weird issues in the recording. And you may have heard this too if you listened to those episodes: there were some periodic dropouts and brief moments of static. You know, only a handful over the course of an hour-plus recording, and I was able to edit around most of them, so hopefully you won't hear them. But no matter how good it sounded, it just wasn't tenable to use a microphone that might drop out at a key moment in a recording. I spent a lot of time troubleshooting to see if the problem might be somewhere else in the chain of gear that I use to record, but I was able to isolate the Ethos as the source of the issue. And so I was pretty distraught, because I had purchased the Ethos through a third-party retailer on Amazon. It was new in box, but I worried that between Amazon, the retailer, and Earthworks, everybody would just kind of point the finger at each other when it came to a service claim, kind of like that Spider-Man animation of the Spider-Men pointing at each other. But thankfully, it turned out that I didn't have anything to worry about. I started by contacting Earthworks, who are based in New Hampshire, and they were remarkably responsive. They really just asked me for a few bits of info, and based on that they asked me to send them the microphone for closer analysis. I think within a day or two of receiving the mic, they told me that, based on their review, it was faulty, and they replaced it with a new one. And that's what I'm using right now to record this episode. There's actually a fair bit of research showing that customers who experience a problem with a product that's effectively resolved come away with greater affinity for that product and its parent brand than customers who never had a problem at all. We spent a lot of time focused on this in my old 37signals days.

And we came up with this concept that we called contingency design, or design for when things go wrong. Joachim and I have talked about this a bit in past episodes: the fact that the design of so many modern systems is so brittle, which causes them to fail spectacularly. Well, I'm glad to say that Earthworks Audio isn't brittle, and the support experience they provided has really made me a big fan of the brand. Now, hopefully I haven't just jinxed myself, but as of now I'd highly recommend the Earthworks Ethos microphone, and honestly any of their products, because they've demonstrated that they really do stand behind them. So the Earthworks Audio Ethos microphone is my recommendation of the week.

Joachim:

I was going to add, I had one tidbit that I found really interesting about Earthworks. Earthworks are really innovative because they keep thinking about microphone technology, and in ways that are surprising. They do these drum microphones that are designed to only capture overhead sounds. They have a setup that is basically a little bit behind the drummer, a little bit above the drummer, plus a kick drum mic. So you have three signals going in, and the recording you get from that is really, really awesome. You get a lot of air from the drum kit. Now, most drum kits, when they get recorded, have a close mic on each individual drum, and then each signal gets fed into the recording. It makes everything sound really closed and a little bit compact. But with this setup, the kit is allowed to move all of the air and they're capturing all of that, and it's kind of incredible. I mean, back in the day, that's how a lot of the coolest Zeppelin drum sounds were made: just capturing the air moving above the drum kit. I just thought it was so interesting that a company that is thoroughly modern decided to go in a very different direction with their design, back to the earlier days of recording, where drums were allowed to reverberate in a room and they would capture that. So it's nice to hear that they also have a great customer experience, but I always loved this idea that they were trying to do something that was a little bit, I don't know, old school, you know, and capture the drum sound in a different way. So I always liked that idea from them.

Ernest:

That's such a cool example. It kind of, to me at least, connects back to what Anmol was talking about in terms of humans focusing on creativity. You know, I don't know that an algorithm would have gone to that sort of a solution. Um, but yeah, that's such a cool example. Um, alright, well, I think that does

Anmol:

just

Ernest:

it. Yeah, yeah.

Anmol:

I was just going to say one thing, which is, you were saying your voice is perfect for

Ernest:

Text.

Anmol:

to text. I think your voice is perfect for ASMR. Like, oh my god, it is so calming. You immediately feel relaxed. Yeah, so I have to say, it is very ASMR.

Ernest:

You're, you're too kind. The problem is it puts people to sleep, is the problem.

Anmol:

No,

Ernest:

Oh, well, I appreciate it. And thank you for that. But also, thank you once again for joining us as our first ever guest. Where can listeners follow you or find more of your work?

Anmol:

They can connect with me on LinkedIn. Um, I did have an Instagram, but I've been off it because I was very much into scrolling endlessly. So I just got myself off it.

Ernest:

Oh, perfect. So you're definitely available on LinkedIn. And then to those listening as well, thank you for joining us here at Learn, Make, Learn. As we mentioned, we want to hear from you, so please send any questions or feedback to LearnMakeLearn@gmail.com, and tell your friends about us. Now, we usually like to preview the topic for our next episode, but in all honesty, with summer in full swing and life offering up some curveballs, we're not entirely sure what we're going to address in our next episode. You may have also noticed that the publication cadence for episodes has become rather, how should I say, irregular. And that's probably going to continue through the summer, but I want to reassure you that Joachim and I are going to continue to bring you new episodes and new guests. So we thank you for your patience and hope you'll continue to join us at Learn, Make, Learn.
