What security dangers come with bringing your own AI?

The growth of generative AI has made many artificial intelligence tools publicly available, but what risks arise when external AI tools are used with company data?

Interest in generative artificial intelligence (GenAI) technologies has skyrocketed since OpenAI’s ChatGPT debuted in November 2022. Because of its capacity to generate a response to a query or prompt, it has been used for everything from drafting emails to powering chatbots.

According to Microsoft’s latest Work Trend Index study, based on a poll of more than 31,000 professional employees, 75% of knowledge workers already use GenAI in some capacity at work, and more than half of those questioned began using it within the previous six months. Notably, more than 80% of GenAI users are bringing their own AI to work, and the proportion rises slightly when small businesses are taken into consideration. It is also worth noting that people of all ages are adopting the technology, not just younger users, who are generally more receptive to it.

As more information is created and has to be handled, we increasingly struggle with what is known as “digital debt”. Email overload is one example: according to a Microsoft study, 85% of emails are read in less than 15 seconds, which explains why people are so eager to adopt products that make their workdays more efficient.

Nick Hedderman, senior director of Microsoft’s modern work business group, believes the pandemic accelerated an accumulation of digital debt that had been building for decades. Sixty-eight percent of those surveyed said they were struggling to keep up with the volume and pace of work, and almost half reported feeling burned out.

Professionals typically turn to generative AI tools available online (such as ChatGPT) or on mobile devices (such as Galaxy AI). Unfortunately, these publicly available tools sit outside corporate control. Moreover, when an online tool is free, the user is usually the product, as their data may be used by others.

“If it is free, you should approach it in the same way as any other social media platform: what data is it using for training? Essentially, are you now the commodity?” says Sarah Armstrong-Smith, chief security adviser at Microsoft. “Is anything you enter used to train models? How are you making sure that information is stored safely and isn’t being used for other purposes?”

Because it relies on shadow IT (hardware or software used in an organization without the IT department’s oversight), the use of external generative AI tools is more a data governance issue than a GenAI one.

“Sanctioned and unsanctioned apps have always existed, and data exchange between cloud platforms has always presented difficulties,” says Armstrong-Smith. “If it’s that simple to copy and paste something from any business system into a cloud service, whether it’s a generative AI app or any other, you have a data governance and data leakage issue. Fundamental problems of data governance and control never go away. If anything, GenAI highlights the absence of that control and governance.”

Data governance
Using external generative AI tools raises two data governance issues.

The first is data leakage, which occurs when users copy and paste potentially private data into an online tool over which they have no control. Others may then obtain this data and use it to train AI systems.
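This kind of leakage is typically countered with data loss prevention (DLP) checks that inspect text before it leaves the network. The sketch below illustrates the idea only; the patterns and function name are hypothetical assumptions, and real DLP products use far richer detection than a few regular expressions.

```python
import re

# Hypothetical examples of the kind of data that leaks when employees
# paste text into an external GenAI tool. Real DLP rules are far more
# sophisticated than these illustrative patterns.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]
```

A paste containing a customer email address would be flagged for review before reaching an external service, while an innocuous prompt would pass through untouched.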

There is also leakage into the organization, which occurs when material that has not been validated or authenticated is introduced into its knowledge base. Too often, users assume that information supplied by an external GenAI service is accurate and appropriate, rather than verifying it as they would be more likely to do with an online search.

Armstrong-Smith warns that bringing a random dataset into a corporate setting, or vice versa, when you have not verified it and do not know what it was trained on, risks contaminating the model or algorithm itself by introducing unconfirmed data into the corporate dataset.

The latter is the more significant issue, since potentially inaccurate or misleading material is added to a knowledge base and used to guide decision-making. It may also contaminate datasets used to train internal AI, leading to inaccurate or misleading results from the AI itself.

Inappropriate use of GenAI tools has already produced poor outcomes. The legal sector, for example, has been trialling generative AI as an aid to drafting legal documents; in one case, after a lawyer used ChatGPT to write a court filing, the tool invented fictitious cases that were then presented to the court.

“You have to be mindful of the fact that it is business data in a corporate environment,” says Armstrong-Smith. “Given that it’s a business setting, what tools are already available to make sure all the governance is in place? It will have security built in, as well as resilience. All of those features will be there by design.”

If a sizeable proportion of employees regularly use external apps, that in itself demonstrates the need for a comparable in-house tool. The best way to choose a generative AI solution is to identify the use cases; that way, the most suitable technology can meet staff requirements and blend into their existing workflow.

The major benefit of using a corporate generative AI tool rather than an open platform such as ChatGPT is that data management is maintained throughout. Because the tool stays within the network’s boundaries, corporate data can be safeguarded, reducing the potential leaks that come from using outside tools.

A corporate AI solution also offers the protection of the AI provider securing the back-end system. It is important to remember, though, that the user organization remains responsible for protecting the front end, as with other use cases and deployment models. Here, data governance remains crucial and should be treated as a necessary part of any deployment.

“We’ve always called it a shared responsibility model,” Armstrong-Smith explains. “The platform providers are responsible for the infrastructure and platform, but the client is in charge of how you utilize it with respect to your data and your customers. The proper governance must be in place. Clients only need to make use of the many controls that are already there by default.”

User awareness
Once generative AI tools are available within an organization, employees must know they exist before they can use them. Promoting adoption may be difficult if staff have already built workflows around third-party GenAI systems.

An awareness campaign for the internal generative AI tool can therefore inform people about its usability and accessibility, while internet moderation systems can redirect users from external platforms to the internal GenAI tool.

Although expectations may have peaked, generative AI is here to stay, and its applications will only expand and become more common.

“I believe a lot of businesses, including Microsoft, are concentrating on this idea of agentic generative AI,” Hedderman adds. “Here, you can look at a business process to determine how an agent could help an organization internally.” Within an organization’s network, such an agent might perform tasks like sending invoices or setting up meetings.

Even though generative AI is a young technology that can take on tedious and time-consuming jobs, data privacy remains a major concern. To preserve the integrity of their data, organizations must educate staff about the dangers of using outside tools and ensure that the right generative AI tools are on hand.

“As we know, technology will become more affordable as it becomes more commoditized, which means AI will become more widely used and you will have more options for which model to use,” says Armstrong-Smith.
