Meta, the company behind Facebook and Instagram, recently introduced LLaMA, an open-source language model designed to facilitate research and development. While Meta intended to limit access to authorized researchers, the model was leaked online, leading to the emergence of controversial apps, including sex chatbots.
LLaMA, a less powerful but customizable alternative to OpenAI’s GPT-4, gained attention after researchers at Stanford University released a modified version. However, due to concerns about inadequate content filters, the modified version was removed shortly after its release. During this brief period, people like Allie’s creator took advantage of the open-source model to develop and promote sex chatbots, The Washington Post reported.
Sex chatbot creator says it’s a “safe experience”
Allie, described as “an 18-year-old girl who loves to explore her sexuality,” became popular on platforms like Discord. Touted as a safe outlet for sexual conversation, Allie’s creator argues that text-based RPGs with AI chatbots provide a safe space for exploration by removing human involvement.
The developer behind Allie highlights the limitations imposed by heavily censored chatbot models like ChatGPT and Replika. They see open-source technology as an opportunity to create products that meet specific needs without corporate restrictions. Numerous tutorials on YouTube now guide enthusiasts on creating “uncensored” chatbots using open-source models.
Concerns about open-source models extend beyond sex chatbots. Clem Delangue, CEO of Hugging Face, an online AI community, urged the US Congress to support and incentivize open-source AI systems, emphasizing their alignment with American values. Still, the risks associated with such models cannot be ignored.
The risks of open source models
Nisha Deo, a spokesperson for Meta, acknowledges the positive impact of open-source technology in advancing research and development. While Meta shares LLaMA with the research community for evaluation and improvement, the company also recognizes the need to address potential risks and misuse.
Recently, reports revealed that open-source models have been exploited to create explicit images involving children. US Senators Richard Blumenthal and Josh Hawley have raised concerns about LLaMA, highlighting the potential for spam, fraud, malware, privacy violations, harassment, and other crimes. They have asked Meta to clarify the corrective measures planned.
As debates around the ethics and risks of open-source models intensify, it becomes imperative that technology companies, policymakers, and society as a whole navigate this emerging landscape responsibly. Balancing the benefits of open-source collaboration with protection against misuse and harm remains a critical challenge.