
ChatGPT and the Risk of Data Loss

By: Bill Howard - Manager of IT Services


According to Wikipedia, “ChatGPT is an artificial intelligence (AI) chatbot developed by OpenAI and released in November 2022. It is built on top of OpenAI's GPT-3.5 and GPT-4 families of large language models (LLMs) and has been fine-tuned (an approach to transfer learning) using both supervised and reinforcement learning techniques.” ¹

ChatGPT was released on November 30, 2022, as a “research preview”. Sam Altman, co-founder and CEO of OpenAI, reported via Twitter that the service had reached 1 million users within 5 days. By January 2023, that number had reached 57 million, and by February it exceeded 100 million. OpenAI’s average monthly website traffic grew from 18 million to over 672 million visits, moving it into the top 50 global websites; according to Digital-adoption.com, ChatGPT is the fastest-growing app in the world, reaching over 100 million users in 2 months.

OpenAI is not publicly traded on the stock market but is now valued at over $29 billion, up from a 2021 valuation of $14 billion. Microsoft has been a major investor, putting in over $3 billion since 2019, and announced in January 2023 that it would invest as much as $10 billion more in the project.

This has all been exciting news for OpenAI and its subscribers; however, what is the downside? Corporate users have discovered the risk of confidential data escaping and being stored as part of the AI infrastructure. For example, according to Robert Lemos, a contributing writer for Dark Reading, an executive copied and pasted his firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint presentation. In another case, a doctor input a patient's name and medical condition and asked ChatGPT to craft a letter to the patient's insurance company. ² Because the data they entered is stored for learning within ChatGPT, these users risk their information being surfaced to another user in response to a query.

Cyberhaven, a data security company, reported that it recently detected and blocked attempts to input company data into ChatGPT. Across the 1.6 million workers it monitors at client companies, over 4% (roughly 64,000 employees) had entered sensitive corporate data, putting it at risk of leaking to the public.
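To illustrate the kind of check such monitoring tools perform (the patterns and function below are hypothetical examples, not Cyberhaven's actual method), a minimal pre-submission filter might scan a prompt for sensitive markers before it ever reaches a chatbot:

```python
import re

# Illustrative patterns only; a real DLP product uses far richer detection
# (classifiers, fingerprinting, context) than simple regular expressions.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_label": re.compile(
        r"\b(confidential|proprietary|trade secret)\b", re.IGNORECASE
    ),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

# A prompt containing a labeled document would be flagged before submission.
hits = flag_sensitive("Summarize this CONFIDENTIAL strategy memo for a slide deck")
print(hits)  # ['confidential_label']
```

In practice, a filter like this would sit between the employee and the chatbot, blocking or warning on flagged prompts rather than silently forwarding them.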

As of March 22, 2023, OpenAI’s CEO had confirmed a bug in the system that allowed some users to see other users’ conversations.

Karla Grossenbacher, a partner at the law firm Seyfarth Shaw, stated that "Prudent employers will include — in employee confidentiality agreements and policies — prohibitions on employees referring to or entering confidential, proprietary, or trade secret information into AI chatbots or language models, such as ChatGPT." She wrote, "On the flip side, since ChatGPT was trained on wide swaths of online information, employees might receive and use information from the tool that is trademarked, copyrighted, or the intellectual property of another person or entity, creating legal risk for employers." ³

While you may be eager to investigate this powerful new technology, be mindful of the information you enter and its potential for exposure.

 

¹ “ChatGPT.” Wikipedia, 18 Apr. 2023, https://en.wikipedia.org/wiki/ChatGPT.
² Lemos, Robert. “Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears.” Dark Reading, 07 Mar. 2023, https://www.darkreading.com/risk/employees-feeding-sensitive-business-data-chatgpt-raising-security-fears.
³ Grossenbacher, Karla. “Employers Should Consider These Risks When Employees Use ChatGPT.” Bloomberg Law, 16 Feb. 2023, https://news.bloomberglaw.com/us-law-week/employers-should-consider-these-risks-when-employees-use-chatgpt.