Snapchat wants to use artificial intelligence to help users have conversations. Should parents be worried? Shelly Palmer, Professor of Advanced Media at Syracuse University’s Newhouse School of Public Communications, and Morgan Wright, Chief Security Advisor for SentinelOne, explore the potential risks of Snapchat’s new feature, My AI, which is powered by the popular AI chatbot tool ChatGPT. Parents have expressed concerns about the feature because it lets users customize avatars and hold conversations, making it unclear whether they are interacting with a computer or a human.
Shelly Palmer highlights the alignment problem in AI, where an AI system's behavior may not match the outcomes its designers intended. He argues that introducing this technology in Snapchat without proper safeguards is irresponsible and dangerous for children. Both experts agree that while AI has many potential benefits, it is crucial to understand and manage the ethical implications and risks associated with its use.
The discussion also touches on the use of AI in the workforce, where AI-driven productivity gains are creating new winners and losers. The panelists warn that parents, governments, and society need to be more proactive in understanding and addressing the ethical and security concerns surrounding AI, including the development of lethal autonomous weapon systems by countries such as China and Russia. Original Airdate: April 28, 2023
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.