Apple Restricts Use of ChatGPT, Other AI Tools for Employees Over Concerns of Data Leak: Report


Apple has restricted its employees from using ChatGPT and other external artificial intelligence tools as it develops similar technology of its own, the Wall Street Journal reported on Thursday, citing a document and sources.

Apple is concerned that employees who use the AI programs could leak confidential data, and it has also advised staff not to use Microsoft-owned GitHub’s Copilot, which automates the writing of software code, the report said.

Last month, OpenAI, the creator of ChatGPT, said it had introduced an “incognito mode” for ChatGPT that does not save users’ conversation history or use it to improve its artificial intelligence.

Scrutiny has been growing over how ChatGPT and other chatbots it inspired manage hundreds of millions of users’ data, commonly used to improve, or “train,” AI.

Earlier Thursday, OpenAI introduced the ChatGPT app for Apple’s iOS in the United States.

Apple, OpenAI and Microsoft did not respond to Reuters’ requests for comment.

Sam Altman, the CEO of OpenAI, told a Senate panel on Tuesday the use of artificial intelligence to interfere with election integrity was a “significant area of concern”, adding that it needs regulation.

“I am nervous about it,” Altman said about elections and AI, adding rules and guidelines were needed.

For months, companies large and small have raced to bring increasingly versatile AI to market, throwing endless data and billions of dollars at the challenge. Some critics fear the technology will exacerbate societal harms, among them prejudice and misinformation, while others warn AI could end humanity itself.

Meanwhile, US lawmakers are grappling with what guardrails to put around burgeoning artificial intelligence, but months after ChatGPT got Washington’s attention, consensus is far from certain.

Interviews with a US senator, congressional staffers, AI companies and interest groups show there are a number of options under discussion.

Some proposals focus on AI that may put people’s lives or livelihoods at risk, such as in medicine and finance. Other possibilities include rules to ensure AI isn’t used to discriminate or to violate someone’s civil rights.

© Thomson Reuters 2023 


