Grok AI Data Privacy Controls: X Users Can Opt Out Of Training Musk’s Chatbot
X, the social media platform owned by tech billionaire Elon Musk, has introduced a feature allowing users to opt out of contributing their content and activity to the training of Grok, the company’s artificial intelligence (AI) chatbot. By default, user data is included in Grok’s training process, but X now provides an option for users to exclude their information, according to a report by news agency Bloomberg.
X Users Get More Control
The company announced the new setting on its platform, stating that all users can now control whether their public posts are used in Grok’s development. Currently, the option is accessible only through the web version of X, with plans to expand it to mobile devices in the near future. For those seeking additional privacy measures, X also noted that users can prevent Grok from accessing their data by switching their accounts to private mode, the report added.
Elon Musk’s artificial intelligence company xAI launched Grok, an AI-powered conversational agent, in November last year. The chatbot, designed to compete with OpenAI’s ChatGPT, is currently accessible only to X’s premium subscribers, and its development has relied heavily on data from the X platform.
Musk has not been shy about discussing the use of X’s data in Grok’s training process. The AI assistant has demonstrated its capabilities by leveraging X posts to provide summaries of current events and respond to inquiries with up-to-date information.
Earlier, in May, Musk-owned X announced it was introducing an array of new AI-powered features to enhance the user experience. An official announcement from the X engineering team, shared via the handle @XEng, said that premium subscribers would get a comprehensive summary of posts associated with each trending story on X. These curated stories are accessible under the ‘For You’ tab within the ‘Explore’ section of the platform.
X’s approach to AI training using user data is not new in the social media landscape. Recently, Meta, the parent company of Facebook and Instagram, faced similar issues. In a move that sparked controversy, Meta informed its users in the European Union (EU) and the United Kingdom about a planned policy change.
This change would have permitted the company to leverage public posts and content from both Facebook and Instagram for AI training purposes.
However, Meta’s initiative met with significant pushback from regulatory bodies. Faced with mounting pressure from these authorities, the company decided to temporarily halt its plans. This incident highlights the ongoing debates and challenges surrounding the use of user-generated content for AI development across various social media platforms.
Elon Musk is using your tweets to train his startup’s AI
Many AI companies are running up against the same problem: the supply of free data on the internet for training AI models is running out. AI bot-makers like OpenAI, Google, and Microsoft have found themselves in the crosshairs of newspapers, authors, and visual artists for using their content to train their artificial intelligence tools.
Elon Musk, who owns an AI startup (xAI) that makes a chatbot called Grok, has the benefit of also owning a treasure trove of data: the social media site X (formerly Twitter). X changed its user settings so that users’ posts are automatically shared with xAI to train Grok.
“To continuously improve your experience, we may utilize your X posts as well as your user interactions, inputs and results with Grok for training and fine-tuning purposes,” the notice in X’s settings reads. “This also means that your interactions, inputs, and results may also be shared with our service provider xAI for these purposes.”
The move has already caught the attention of a European privacy watchdog, the Irish Data Protection Commission (DPC), which told TechCrunch Friday that it was “surprised” by the development.
The startup xAI launched last year and recently raised $6 billion from investors such as Andreessen Horowitz, Sequoia Capital, and Saudi Arabian investor Prince Alwaleed bin Talal. The funding round in May pushed the company’s valuation to $24 billion — higher than the previously expected $18 billion. The company counts AI talent from DeepMind, OpenAI, Google Research, Microsoft Research, and other companies among its employees.
Musk has positioned Grok, which also debuted in 2023, as a ChatGPT rival that he hopes will be used to deliver news to Americans. Musk has said the chatbot has “a bit of wit” and “a rebellious streak.” But the chatbot has had issues. For example, it wrongly stated that U.S. Vice President Kamala Harris, rather than former President Donald Trump, had been shot on July 14.
Musk recently polled X users on whether his EV company Tesla — which just posted sinking profits in a disappointing quarterly earnings report — should invest $5 billion in Grok’s maker xAI. Nearly 70% of about 960,000 respondents said yes.
Elon Musk’s X Sparks Privacy Concerns With Grok AI Data Usage
X implements controversial data-sharing settings for its AI without user consent, raising alarm among privacy advocates
The realm of social media is undergoing dramatic transformations, particularly with Elon Musk’s continued evolution of X, formerly known as Twitter. Recent moves by the platform to implement its Grok AI system using user-generated data are raising significant concerns about privacy and consent.
Since taking control of what was once Twitter, Musk has signaled ambitious plans, claiming the effort will result in “the world’s most powerful AI by every metric by December this year.” That goal hinges on a colossal AI training cluster powered by some 100,000 Nvidia graphics processing units (GPUs) designed to process vast amounts of data rapidly. However, the crux of the issue lies not just in the technology but in how it sources its data.
In a surprising turn, reports suggest that X has automatically opted all users in to having their posts and interactions used to train the Grok AI system. Notably, the change took effect without prior user consent, with the new data-sharing setting hidden away in settings and accessible only via the platform’s web version, frustratingly leaving mobile app users out in the cold. A user on X who goes by the handle @EasyBakedOven unearthed the unexpected setting, which, according to X, allows posts, interactions, and inputs to be used for training and fine-tuning purposes.
These developments have put privacy advocates in a particularly vexing position. X’s under-the-radar modifications have drawn scrutiny not just from users but also from privacy watchdogs, raising questions about whether users should expect transparency when it comes to technology that directly affects their data and privacy.
For users concerned about data privacy, disabling Grok AI’s access to their posts involves several steps, and it cannot be accomplished through the app — a significant oversight considering that many users predominantly navigate social media through their mobile devices. Instead, one must log on to X’s desktop version, navigate to Settings, then Privacy and safety, and finally select Grok to uncheck the data-sharing option. Completing this procedure is necessary for users who wish to maintain a degree of autonomy over their personal data.
Compounding the situation, Meta recently faced similar scrutiny over its plans to train AI models on user data from platforms like Instagram and Facebook. After public outcry, Meta made its data-sharing plans more transparent, though it faced allegations of complicating the opt-out process. The episode highlights a broader trend: tech companies are increasingly using user-generated content to enhance AI services, often with limited user agency.
Amid these controversies, a sense of irony has crept in, especially as Musk previously criticized the data processing practices of various tech firms. His current actions invite the same comparisons, revealing the complexities involved in balancing technological advancement with ethical considerations in user privacy.
Fans of cryptocurrency, meanwhile, have found themselves relatively unaffected by the platform’s latest cosmetic changes. The recent removal of the Bitcoin emoji hashtag, a decision made public shortly before a major cryptocurrency conference, does not seem to have disturbed the community deeply. Many attendees shrugged off the change, finding it neither significant nor alarming.
Elon Musk himself tweeted about the rationale behind the broader removal of what he described as “special hashtags,” advocating for transparency in promotions linked to advertisements. The broader implications of this policy shift invite speculation, as it may be his attempt to recast the platform’s engagement strategies.
Notably, this isn’t the first time Musk’s influence has reverberated through social media and its relationship with cryptocurrency. Crypto-related emojis and hashtags have sparked intrigue and engagement, but their removal does not strike the same chord with followers — a sign, perhaps, of shifting priorities within the vibrant crypto community.
As Musk reshapes X, users are left pondering the intertwined relationship between user data privacy and corporate interests. With many feeling disenchanted yet helpless, will the true trajectory lead toward a more ethically responsible use of AI, or will the drive for technological supremacy overshadow individual rights and freedoms?
The concern becomes increasingly pressing in a landscape where tech companies must ensure accountable use of user data, emphasizing the need for user consent and clarity on data utilization. As corporations expand their AI capabilities, maintaining transparency and user trust should not merely be a regulatory checkbox. It must become a framework guiding how technology interacts with our lives.