Digital Citizen

Fastmail

Live your best digital life with Fastmail. Subscribe to Digital Citizen and listen to Fastmail CTO Ricardo Signes talk to great thinkers about the digital world. Learn how to be a more responsible digital citizen and make the Internet a better place.

Categories: Technology

Listen to the latest episode:

Join us on a journey to learn more about the intersection of linguistics and AI with special guest Emily M. Bender. Come with us as we learn how linguistics functions in modern language models like ChatGPT.

Episode Notes

Discover the origins of language models, the negative implications of sourcing data to train these technologies, and the value of authenticity.

▶️ Guest Interview - Emily M. Bender

🗣️ Discussion Points
  • Emily M. Bender is a Professor of Linguistics at the University of Washington. Her work focuses on grammar, engineering, and the societal impacts of language technology. She's spoken and written about what it means to make informed decisions about AI and large language models such as ChatGPT.
  • "Artificial Intelligence" (AI) is a marketing term coined in the 1950s by John McCarthy to describe an area of computer science. Today's AI language systems are built using natural language processing, which draws on linguistics, the science of how language works. Understanding how language works is necessary to grasp large language models' limitations and potential for misuse.
  • A language model is a type of technology designed to model the distribution of word forms in text. While early language models simply measured the relative frequency of words in a text, today's language models are far bigger in both the data they are trained on and the parameters they store. As a society, we must keep reminding ourselves that synthetic text is not a credible information source. Before sharing information, it's smart to verify that it was written by a human rather than a machine. Valuing authenticity and citations is among the most important things we can do.
  • Distributional biases in training data are reproduced in the output of large language models. The less care we put into curating that data, the more various patterns and systems of oppression will be reproduced, regardless of whether the end result is presented as fact or fiction.
  • Being a good digital citizen means avoiding products built on data theft and labor exploitation. On an individual level, we should insist on transparency regarding synthetic media. Part of the problem is that there is currently no watermarking at the source, so there is a pressing need for national regulation and accountability around synthetic text. We can also continue to raise the value we place on authenticity.
🔵 Find Us 💙 Review Us

If you love this show, please leave us a review on Apple Podcasts or wherever you listen to podcasts. Take our survey to tell us what you think at digitalcitizenshow.com/survey.

Previous episodes

  • 24 - Exploring AI with Emily M. Bender
    Tue, 25 Jun 2024
  • 23 - From Players to Creators: Diving into the Video Game Industry
    Tue, 11 Jun 2024
  • 22 - You Can Thrive Here: Local Leaders on Philly’s Move Into Ethical Tech
    Tue, 28 May 2024
  • 21 - Repairing Our Right To Fix it with Aaron Perzanowski
    Tue, 14 May 2024
  • 20 - Getting Things Done Using Your Calendar with David Tedaldi from Morgen
    Tue, 30 Apr 2024