
How Does Tomorrow Sound is a six-episode series on the future of podcasts. Hosts Kate, Josh, and Neleigh endeavor to predict what podcasts might look like — or evolve into — in 10 years’ time. Expert interviews are braided with funny, experimental, blue-sky brainstorming sessions and audio experiments by the hosts. This show will challenge your assumptions, make you wonder, and spark new ideas about the road from here to the future of audio narrative.

FOLLOW:

Apple Podcasts | Spotify | RSS

Let’s imagine some audio futures! This podcast explores the future of digital audio and asks what podcasts might become in ten years.

Podcasts flourished out of the tech of the early 2000s. Now, artificial intelligence is poised to change everything. We speak with Natural Language Processing (NLP) researcher Philippe Laban; science writer Matthew Hutson; professor, programmer, and composer David Cope; and creator of Late Night with Robot, Ana-Marija Stojich. Every day, NLP and speech synthesis imitate human language more closely. Now imagine AI-generated pods offering a key feature no live producer can: 24/7 interactivity. If the future of pods sounds like the best AI chatbot, one that remembers everything, is it your AI BFF? Or a scammer’s paradise? And will we listen?

If you dig us, please subscribe, review, and share — it really helps. And thanks!

The Big Takeaways:

  • If the new tech of the early 2000s made podcasts possible, how will new, new tech — artificial intelligence like natural language processing and speech synthesis — change how we make and listen to digital audio?

  • Philippe Laban, a researcher in natural language processing and human-computer interaction, has already built an AI-generated news podcast called NewsPod, proving it’s possible. Now he works on interactivity in the chatbot space, which he believes may be the future of digital audio content.

  • Matthew Hutson writes about AI for outlets like The New Yorker and Nature. He guides us through an exploration of what NLP and AI can already do in creative fields. He connects us to OpenAI’s DALL·E 2, which uses AI to generate images, and to David Cope’s Experiments in Musical Intelligence (EMI, nicknamed “Emmy”) and Emily Howell, algorithms that compose new music. He mentions a company called Alethea AI that offered to make a chatbot out of him.

  • Ana-Marija Stojich is a comedian, writer, actor, creator, and host of Late Night with Robot (beams.fm), where she interviews AI versions of famous people, like Amelia Earhart, Barack Obama, Zora Neale Hurston, and Vincent Van Gogh.

    • “I’m just lying on my couch, like texting with AI Barack Obama, and like Albert Camus, and Vincent Van Gogh, and like Zora Neale Hurston. And they’re all having different conversations and it’s fun because they text you back right away.” — Ana-Marija Stojich, Late Night with Robot

    • “I learn things all the time from the AI. Vincent Van Gogh AI was one of my favorites.” — Ana-Marija Stojich, Late Night with Robot

  • It’s easy to dismiss the idea of AI podcasts, but one advantage AI and NLP pods would have over human pods is that they could be fully interactive 100% of the time (recall the 2013 film Her). Because they’re interactive, they could retain information about the user, remembering what we say, like, and dislike, and who we’re in relationships with — gathering useful data about us while making us feel heard and valued, maybe even loved.

  • Comfort for the lonely? A playland for artists? A marketer’s goldmine? A scammer’s paradise? It will come down to why we listen. Stay tuned for more on that in Episode 2.

Other resources

Marc Maron experiment

DALL·E 2 Exploration

On the Ethics of NLP

Contact Us

Tell us what you really think by emailing PodflyCalls@gmail.com or leaving us a voicemail at 440-290-6796.

Or check us out online:
