Companies using generative artificial intelligence tools like ChatGPT could be putting confidential customer information and trade secrets at risk, according to a report from Team8, an Israel-based venture firm.
The widespread adoption of new AI chatbots and writing tools could leave companies vulnerable to data leaks and lawsuits, said the report, which was provided to Bloomberg News prior to its release. The fear is that the chatbots could be exploited by hackers to access sensitive corporate information or perform actions against the company. There are also concerns that confidential information fed into the chatbots now could be used by AI companies in the future.
Major technology companies including Microsoft and Alphabet are racing to add generative AI capabilities to improve chatbots and search engines, training their models on data scraped from the internet to give users a one-stop shop for their queries. If these tools are fed confidential or private data, it will be very difficult to erase the information, the report said.