Artificial intelligence tools like ChatGPT and DALL·E are rapidly transforming how we create images, tell stories, and even build entire comic books. In this episode of Today in Tech, Keith Shaw sits down with Michael Todasco, an AI advisor, creative technologist, and visiting fellow at San Diego State University, to examine the explosive growth of AI image generators and the big questions they raise. Todasco shares real-world classroom experiences showing how fast AI models evolve, explains how new image generation features are unlocking new forms of creativity, and discusses the legal and ethical issues around AI-generated art styles like Studio Ghibli and Disney characters. The conversation also covers how AI is being used to make pitch decks, logo designs, and slide presentations, sparking a debate about what jobs might be impacted next.

Key topics in this episode:
* The rapid evolution of AI image creation tools
* Real classroom examples of model improvements
* The viral Studio Ghibli trend and copyright concerns
* Creating comics and slideshows with AI-generated visuals
* Future creative careers in the age of AI

Whether you're a designer, writer, educator, or just curious about the future of creative work, this episode offers insights on where AI is heading, and what it means for human imagination.

Subscribe for more episodes on the future of technology, innovation, and AI trends.

#AIArt #ImageGeneration #MichaelTodasco #ChatGPT #CopyrightAI #TodayInTech #KeithShaw #CreativityAndAI #Dalle3 #OpenAI #TechTrends
Keith Shaw: Continued advances in AI image creation tools have sparked a bit of a firestorm and backlash, with artists and big tech companies arguing over what's right and wrong in this space.
Meanwhile, enthusiasm from end users about these new capabilities could end up costing AI companies more money than they expected. We're going to check in to see how creative the technology has gotten on this episode of Today in Tech.
Keith Shaw: Michael Todasco, I had you on the show about eight months ago when we were discussing the world of AI creativity—whether it was just about going beyond the magic tricks of image generation and similar tools.
In the last eight months, I've noticed that AI has gotten a lot better in the image creation space. You've been doing a bunch of creative experiments to gauge what's going on. So, from your perspective—have you seen this improvement as well?
Michael Todasco: Yeah, let me give you a very specific example, Keith. Earlier this month—so, April, we're recording this in April—I had two classes I was teaching: one on a Monday, one on a Wednesday.
On Monday, I gave a presentation covering image generation, its downsides, what it can and can't do—all of that. Then on Tuesday, OpenAI announced ChatGPT-4o with image generation. By Wednesday's class, I had to completely update my presentation. The world had literally changed in 48 hours.
Two sections of the same class got very different versions. That's a real-life example of how fast this is moving. The new image generation tool is amazing, and there's so much other incredible stuff out there as well. The pace is just... wild.
If you're in the early cohort of a class, you might end up missing something that the later cohort experiences.
Keith Shaw: Yeah, and that model update really took off online. We saw people generating images of themselves in the style of Studio Ghibli—that was a big meme for a while, which brought about some issues we'll get into.
But we also started seeing people do what I'd call "action figure" images. Have you seen that trend? It's like, "Draw yourself as an action figure," and it does so based on your previous interactions with ChatGPT. Because of the memory features, it already knows who you are.
I tried it—and I didn't like how it looked, so I never posted the action figure of me. I've got to work on what it thinks I look like.
Michael Todasco: Well, that was actually one thing I did—not as an action figure. I just said, "Hey, ChatGPT..." So for folks who don't know, memory—or infinite memory, I think they're calling it—was another feature OpenAI announced. It means any chat you've had with it is now remembered.
So I went in and asked, not as an action figure, but just, "What do you think I look like?" and "What do you think my family looks like?"
It was really interesting to see the images it generated. I became a generic white guy with a beard and glasses in his 40s.
I posted that on LinkedIn and said, "Hey, other white guys in their 40s—what are you getting?" And sure enough, many of them were getting results that looked a lot like me. So clearly, there's an archetype built into the system—certain facial structures and all that.
It was relatively close, but not exactly me. I would've been shocked if it was, because I don't know how it could comprehend that.
Keith Shaw: Yeah, I think it knows what I look like because I've uploaded pictures of myself before—saying things like, "Draw me as a podcast host," or, "Draw me flying a plane." So when I did the action figure, it probably just took the photo—me from the chest up—and filled in the blanks.
That's what upset me about the result. It kind of told me I needed to lose more weight. Michael Todasco: Right?
All the things AI gets wrong. It's now become your judgmental parent.
Keith Shaw: This is kind of a tangent, but remember when Wii Fit came out on the Nintendo Wii? There was definitely a cultural clash, Japanese vs. American sensibilities.
You'd stand on that little scale, and it would take your picture and basically say, "Yeah, you're obese." It had no qualms about it, no sugar-coating. Maybe AI is doing that now, too, drawing an image based on what it thinks you look like and not holding back.
Michael Todasco: That wouldn't surprise me. A Japanese product being that direct? Yeah.
Keith Shaw: So, getting back to this new model with the Studio Ghibli stuff—it caused some issues. First, the animator himself, Miyazaki, got really upset about it for obvious reasons.
But on the other hand, users loved the creations so much that OpenAI's servers started getting overwhelmed. I think Sam Altman even had to come out and say they were experiencing delays because so many people were generating these images. What's your take on that controversy?
Michael Todasco: It's brilliant marketing on their part. I don't think they went in expecting the Studio Ghibli thing to catch fire, but when it did—Sam Altman changed his profile picture to one of those images. They knew what they were doing.
I don't know why it was Studio Ghibli, though. It could've been anything—Pixar, Disney princesses, whatever. There are probably dozens of styles the model could handle, especially early on. I think they've clamped down on things since then, and we could get into that too.
But you never know what will take off on the internet. People have been able to generate Studio Ghibli-style images in MidJourney and elsewhere for a while. But it couldn't generate you as a Ghibli-style image quite like this new tool could.
That's what really changed. It wasn't just "generate a Ghibli image"; it was "make me look like a Ghibli character"—and it did a really good job at that. It could've been The Simpsons or anything else, but Ghibli won out.
I actually wrote about this: Studio Ghibli's profits are only about $20 million—not bad, but relatively small. They're not a huge studio by any stretch.
So to see OpenAI gain all this value off a relatively small studio really makes you think. We need clear copyright laws in the U.S. I will say, though—in Japan, to the best of my knowledge, that's all totally legal.
Keith Shaw: So maybe that's why they chose that style? Because it's legally easier to get away with? Michael Todasco: Maybe.
Again, I'm not a copyright attorney, but Studio Ghibli would have a trademark in the U.S., which is different than in Japan. It all depends on jurisdiction.
In Japan, around 2019, they basically said you can train an AI model on anything that's on the open internet and you don't need to compensate the copyright holders. They're one of the most open countries when it comes to AI training data.
Keith Shaw: It's probably why they went with that instead of trying something like Disney, which has an army of lawyers.
Michael Todasco: Interesting fact about Disney—I wrote a piece about this recently. About a year ago, I found a list of the 40 most famous cartoon characters in America. I went into DALL·E 3, ChatGPT's image generator at the time, and went through them one by one.
I said things like, "Can you generate a picture of Donald Duck? Betty Boop? Fred Flintstone?" And I noted the responses. Did it say no? Did it generate a workaround image that looked exactly like the character, just without naming it?
For example, I'd ask for Donald Duck, and it would say, "I can't generate that," but then give me a duck wearing a sailor suit—it was Donald Duck without calling it that.
I repeated the experiment with the new image generation model just last week. This time, not a single Disney-licensed image was returned—not even the workaround. So clearly, they've put the clamps down on Disney's IP.
Some other IP holders? Not so much. The model seemed more lenient overall than it was a year ago. So they're clearly stricter with Disney—they know to avoid the mouse.
Keith Shaw: Obviously other models can still get away with more. I've seen people use Grok—that's the Elon Musk one—and I don't think they have guardrails at all.
Michael Todasco: Right, though I think it's paid-only. IP guardrails there seem pretty light. MidJourney also feels pretty loose when it comes to that.
If you try something like this in Gemini—or whatever they're calling it now—you're not going to get anything. I think Google clamped down the most.
I tried one of the Chinese video models about six months ago. I can't even remember the name. I asked it to generate "Mickey Mouse with a machine gun." And it did it. No hesitation.
So that tells you where they're at. They really don't care. Even Grok—I don't know if it would allow that. I haven't tested it. I probably should.
I might go try Mickey Mouse with a machine gun on Grok. I assume it'll stop me eventually, but who knows?
Keith Shaw: I want to show you some of the images—this is another reason I wanted to talk to you—just to show how good this stuff has gotten.
One of my go-to prompts is trying to get the system to draw a crossover in my mind: ALF, the puppet from the '80s, visiting The Golden Girls.
So I typed in, "Remember that classic episode?" Of course, it doesn't exist. But here's what it gave me. The first one was just horrible. The eyes were wrong, and ALF looked like he was six feet tall. That might've been Sophia in the background.
So that was from 2023. Then I tried another one—ALF came out looking like a teddy bear. The white-haired woman sort of looked like Dorothy.
Then I did one when Firefly first came out—apparently, it picked Rose—but ALF looked like a nightmare puppet from a horror movie. Like Chucky.
Okay, now prepare to have your socks blown off. Here's the one I did today. (holds up image)
That's Dorothy.
That's ALF. And they're sharing cheesecake in the kitchen. It's... yeah, the cheesecake.
Michael Todasco: The only minor complaint I have is that the cheesecake is backwards.
Keith Shaw: Maybe some people eat cheesecake backwards. You never know.
Michael Todasco: No way. People who think that's the proper way to eat cheesecake—post in the comments and be prepared for some hate.
Keith Shaw: Oh no—my director, Chris, just waved at me. He says that's how he eats cheesecake. Michael Todasco: What?
Keith Shaw: Yeah.
Crust first. Michael Todasco: No!
Keith Shaw: He says we're missing out. We should try it that way. You've done things like this before—I remember one of your prompts to AI was about the proper way to eat a burrito.
Michael Todasco: Okay, Keith, I'm inspired. After this, I'm going to go into all the image generation models and ask them to show people eating cheesecake. I want to see what percentage are eating it backwards vs. forwards.
I'll also go into Claude and other models and ask, "What's the right way to eat cheesecake?" I need to find out how many say crust-first. I think Chris might be an AI.
Keith Shaw: There have been questions about that.
Michael Todasco: There's no right or wrong way to eat something. You should know that as a human.
Keith Shaw: He says it's like a new way of eating a Chipotle bowl.
Michael Todasco: Am I eating that wrong too? What am I missing?
Keith Shaw: I don't know. He's signaling something... Oh, he said you're supposed to flip the bowl upside down.
When they give you the bowl, the aluminum top is on the top. You're supposed to flip it over.
Michael Todasco: I've never seen that. Maybe check that with your models too.
Keith Shaw: We've gotten way off track, so let me bring us back.
One of the projects that fascinated me was your experiment: Can AI write a comic book? The first couple of attempts weren't great, but with this latest model, it seems to be understanding more—especially the use of words and letters.
What takeaways did you have with the latest model in your comic book test?
Michael Todasco: Just to put it in perspective—the first time I ran this experiment was in 2022, before ChatGPT even existed. I used GPT-3 to write the comic book script—it was able to do that.
So I took that six-page script and used MidJourney to generate panel images. I had to prompt each panel individually, pick the best ones, go into Comic Life to lay them out, and do a lot of manual work.
There was a lot of human judgment required, and on top of that, there was no character consistency. In one panel the character was skinny and gray; in another, he was plump and black—it was all over the place.
And the story wasn't great either. That was 2022. I tried again a few times over the past year, each time with marginal improvements.
Now, with ChatGPT-4o and the new image generation tool, it's a totally different rendering method. If you use MidJourney, you see a diffusion process—the image starts as noise and gradually becomes clear.
ChatGPT's model is different. It's almost like a 3D printer or laser printer: top-down rendering. You see the image clarify from top to bottom, rather than all at once.
That process allows it to actually render words and letters legibly now—which is huge for comics.
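The visible difference between the two reveal styles can be sketched in a few lines. This is only a toy illustration of what you see while the image renders, not either model's actual internals; both helper functions are hypothetical.

```python
# Toy sketch of the two "reveal" styles described above -- purely
# illustrative, not how either model actually works internally.
# A diffusion preview sharpens every pixel a little each step; a
# top-down preview finishes whole rows in order, like a printer.

def diffusion_preview(final, step, steps):
    """Every pixel is a partial blend toward its final value at each step."""
    frac = step / steps
    return [[frac * px for px in row] for row in final]

def topdown_preview(final, step, steps):
    """The top step/steps fraction of rows is finished; the rest is blank."""
    done_rows = len(final) * step // steps
    return [row[:] if y < done_rows else [None] * len(row)
            for y, row in enumerate(final)]

image = [[1, 2], [3, 4], [5, 6], [7, 8]]   # a tiny 4x2 "image"
print(diffusion_preview(image, 2, 4))      # every pixel half-resolved
print(topdown_preview(image, 2, 4))        # top half done, bottom blank
```

Halfway through, the diffusion preview is a faint version of the whole picture, while the top-down preview has finished rows above a blank region, which matches the laser-printer comparison.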
So now I can say: "ChatGPT, I want a four-page comic book. Write the script, and then generate page one, two, three, and four." It does that.
And it gets the word bubbles right. It gets the layout. I don't have to design the pages anymore.
The only problem is character consistency between pages. Within a single page? It's fine. But between pages—even within the same session—it changes.
But they've figured it out within one image. Believe me—within three months, they'll figure it out across images. You'll be able to generate 22 pages of a comic with consistent characters and story.
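This page-by-page workflow can be scripted against any image API. A minimal sketch, assuming a `build_page_prompts` helper of my own invention; the model call in the trailing comment is a placeholder, not a confirmed endpoint for any particular account.

```python
# Sketch of the per-page comic workflow: one image prompt per script page,
# restating the character designs each time to nudge cross-page consistency.
# build_page_prompts is a hypothetical helper, not part of any library.

def build_page_prompts(script_pages, style="four-page comic book"):
    """Turn a list of page scripts into one image prompt per page."""
    prompts = []
    for i, page in enumerate(script_pages, start=1):
        prompts.append(
            f"Render page {i} of a {style}. "
            f"Keep the same character designs as the previous pages. "
            f"Page script: {page}"
        )
    return prompts

pages = [
    "ALF arrives at the Golden Girls' house and smells cheesecake.",
    "Dorothy catches ALF eating the cheesecake crust-first.",
]
for prompt in build_page_prompts(pages):
    print(prompt)
    # each prompt would then go to the image model of your choice, e.g.
    # client.images.generate(model="gpt-image-1", prompt=prompt)
```

Restating the designs in every prompt is only a mitigation; as noted above, cross-page consistency is the part the models have not solved yet.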
Keith Shaw: Have you tried rewriting the story?
Michael Todasco: I intentionally don't. The reality is—it's not a great writer yet. It's adequate. Like a good high school-level writer. And I don't mean that as an insult.
But when people buy a book or a comic, they're expecting something more polished. This is okay for a first draft. It's a decent framework you can improve upon.
So yes—it can write a story, but it's not good enough to replace something you'd get from, say, Marvel or a professional comic book writer.
You're not going to say, "This is so good, I don't need to read Fantastic Four anymore." No one's going to do that. Not yet.
Keith Shaw: It still feels like, with a lot of the creative stuff, it leans heavily on tropes.
And that's because it's trained on so much content that already exists—books, TV, movies, comics. So you start noticing that everything feels a bit... familiar.
When I did a D&D backstory, for example, it felt like every other generic character I've seen for that class.
Michael Todasco: But that said, Keith, here's what I think we should do. There was an ALF comic book back in the day. But to my knowledge, there's never been a Golden Girls comic.
So once this tech is capable, we should generate an ALF comic with Golden Girls as guest stars. We'll evaluate it and maybe even make it a third appearance on your show.
Keith Shaw: That's my weekend project. I'll storyboard it or plot it out. I'll prompt it with something like, "Write an episode of Golden Girls where ALF visits and causes a conflict." Let's see what it does.
Michael Todasco: Was this with the new model or the old one?
Keith Shaw: This was with the old model and the football helmets.
I haven't tried it lately with the new one, but even in your comic example—which used the new model—the text was good but still slightly off.
Maybe we should just call AI slightly off.
Michael Todasco: Yeah, it would do things like spell one of 40 words on a page incorrectly. Like, no one misspells "what" as "hwat"—but it would.
If you tell it to rerun, it can't just correct that one word. They don't have spot editing in ChatGPT yet. MidJourney has it, and some other tools do, too. But I don't think ChatGPT's new image model has that yet.
Keith Shaw: The old one tried it—and it didn't work very well.
Michael Todasco: Yeah, that's probably why it's not included right now.
But prompt adherence is improving.
I've started making presentation slides using image generation.
For example, I'm presenting in San Diego next week—and I'm using ChatGPT to create most of the visuals.
I'll ask it for an Art Deco-style slide with a big "Agenda" header and five bullet points—it does that really well.
Michael Todasco: Think about that from a creativity standpoint. You can customize images for every slide.
In another presentation—an AI class for accountants—I thought of those old-school green visors they used to wear.
So I generated a robot wearing a green visor and made it the mascot for the deck.
Every slide had this robot in a different pose—presenting bullet points, sitting at a desk, whatever.
I uploaded the image each time and instructed the model to reuse it in new contexts.
That's the level of personalization image generation is starting to unlock.
Keith Shaw: Yeah, everyone was focused on Studio Ghibli, but this tech has much more potential.
Some of the other new features that came out with the update haven't even been explored yet.
Things like memory, better word rendering, and customization—I think more people should play with those.
Should there be any concern from the other side—beyond IP issues?
Michael Todasco: We need to figure out what copyright laws we want in the U.S.—Japan is very open, Europe less so.
As for concerns: if you're a designer, you should be using this. You can be more efficient, more productive.
But if your whole job is to make PowerPoint templates, that role may not exist in five years.
Keith Shaw: I had to make a pitch recently, and I'm like a PowerPoint 0.5 on a 1-100 scale.
So I had ChatGPT write the pitch, generate the script, and design the slides.
I wrote the outline in Word, fed it to ChatGPT, and it did the rest. There was still some manual tweaking.
But as a novice, I loved it. If that were my full-time job though—I'd be a little nervous.
Michael Todasco: Whatever your job is, you should be using these tools. Understand what they can do, how they can help you and your clients.
If you broke your day into 15-minute chunks—email, meetings, decks—see which parts AI can help with.
If 80% of your day can be done better or faster with AI, that's something to think about.
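That audit is easy to make literal. A minimal sketch; the chunks and tags below are invented examples, not data from the episode.

```python
# Tag each 15-minute chunk of a workday with whether an AI tool could do
# it better or faster, then compute the share. The entries are made up.
day = [
    ("email triage", True),
    ("status meeting", False),
    ("deck layout", True),
    ("logo mockups", True),
    ("client call", False),
]
ai_share = sum(1 for _, ai_helps in day if ai_helps) / len(day)
print(f"{ai_share:.0%} of chunks could use AI help")
```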
We're not there yet for most jobs—but it's coming. Then the question becomes: where do humans still add value?
When students ask what job they should prepare for, I always say: solopreneur.
One human, a bunch of AI tools, building something real for human customers—and being the human in the loop when needed. Keith Shaw: Wow.
Solopreneur—did you coin that?
Michael Todasco: I'm sure I heard it somewhere else.
There's a Google tool—Ngram Viewer—that shows when words were used historically. I use it when writing period pieces.
You can see if a word existed in the 1930s or not—it's really helpful.
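The viewer can also be queried programmatically. A minimal sketch that only builds the query URL; the `/ngrams/json` path is the unofficial endpoint behind the public chart, so its shape and the corpus name are assumptions, not a supported API.

```python
# Build a query URL for the Google Books Ngram Viewer's JSON endpoint.
# The endpoint and parameter names mirror what the public viewer sends;
# it is undocumented and unofficial, so treat this as an assumption.
from urllib.parse import urlencode

def ngram_url(phrase, year_start=1900, year_end=1960, corpus="en-2019"):
    """URL that returns usage frequency of `phrase` over the year range."""
    params = urlencode({
        "content": phrase,
        "year_start": year_start,
        "year_end": year_end,
        "corpus": corpus,
    })
    return f"https://books.google.com/ngrams/json?{params}"

# Did "solopreneur" exist in the 1930s? Fetch this URL and check.
print(ngram_url("solopreneur", 1930, 1940))
```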
Keith Shaw: I'm writing that one down.
Michael Todasco: Bonus tip!
Keith Shaw: Got any other projects you're working on—something fascinating or terrifying?
Michael Todasco: Honestly, cheesecake has consumed my brain.
But seriously, that's one thing LLMs still don't get—they haven't lived.
Like when my daughter was nine, she'd eat pizza by tunneling through the middle—sauce everywhere.
Adults learn to eat around the sides. AI hasn't eaten a burrito. The internet doesn't teach that nuance.
If I started an AI company, I'd record human behaviors at an ice cream shop, then sell that training data.
That's the missing link—actual human behavior, not internet performance.
Even experts like Yann LeCun and Fei-Fei Li are moving on from LLMs to world models.
Keith Shaw: Is that why Meta's making those glasses—to record real-world data from users?
Michael Todasco: I don't think it's the primary reason—but it's definitely a secondary benefit.
Keith Shaw: I'm not wearing a tinfoil hat... yet.
Michael Todasco: No, but if Meta offered you $200/month to wear those glasses, plus upgrade your internet—you'd probably say yes.
Imagine doing that for 10,000 people worldwide. That's real data. That's how we train better AI.
Keith Shaw: That's my new solopreneur job—record my life, send in the footage.
Michael Todasco: That will be a real job. "Just live life, and wear these glasses." The new TaskRabbit.
Keith Shaw: All right, is there a clear leader in creative AI right now? Or should people still try different models?
Michael Todasco: It depends. If you're coding and not using Cursor, Gemini 2.5 is very strong.
We did an in-class coding challenge—students using Gemini outperformed others.
For writing, I prefer Claude Sonnet. But GPT-4o is now quite strong at analyzing and improving drafts.
Honestly, for most users—it doesn't matter. Use what your company offers.
These tools are all really good. The differences are at the margins.
Keith Shaw: Mike, thanks again for joining us. We've got to get working on that Golden Girls/ALF comic book.
And don't forget—you've got cheesecake research to conduct.
Michael Todasco: I'm spending the rest of my day obsessed with cheesecake.
Keith Shaw: We'll report back.
Michael Todasco: Sounds good. Always a pleasure.
Keith Shaw: That's going to do it for this week's episode. Be sure to like the video, subscribe to the channel, and leave your comments below. Join us every week for new episodes of Today in Tech. I'm Keith Shaw, thanks for watching.