Generative artificial intelligence (AI) security policy template
Deploying Securely with ChatGPT
With the launch of ChatGPT, and now GPT-4, the buzz around generative AI has been deafening.
I have heard of organizations taking a variety of positions when it comes to the security implications of this incredibly powerful new tool, ranging from doing nothing to banning it completely.
Neither approach is correct.
While there are potential risks posed by using generative artificial intelligence (AI) tools, the productive power they unleash more than compensates for them.
So in this post, I’ll provide a policy template that can help your organization use these tools as securely as possible. Obviously it will require customization (at the very least replacing ORGANIZATION_NAME) based on your circumstances and risk tolerances, but it should give you a place to start.
I’ll plan to update this over time and keep it an evergreen document as things evolve.
Purpose
This policy identifies acceptable methods for interacting with generative artificial intelligence (AI) tools, including but not limited to ChatGPT. Generative AI tools can enhance productivity but may also introduce security risks: they can leak sensitive data, introduce security vulnerabilities into code, and provide incorrect information when used to design physical or virtual architectures.
Scope
This policy applies to all ORGANIZATION_NAME employees in situations where the employee’s action may expose ORGANIZATION_NAME systems or data to generative AI tools.
Policy
The Chief Information Security Officer (CISO) shall:
Within 30 days, create a machine-readable, searchable database of generative AI tools, by name, that classifies them as either:
Unconstrained; or
Constrained.
For all such tools, the CISO shall identify which categories of data they are certified to handle.
Update this database every subsequent 30 days.
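As one way to picture the CISO's database requirement above, here is a minimal sketch of what a machine-readable, searchable tool registry could look like. The schema, field names (such as "classification" and "certified_data_categories"), and tool entries are all illustrative assumptions, not something the policy mandates.

```python
import json
from datetime import date

# Hypothetical registry: one record per generative AI tool, classifying
# each as constrained or unconstrained and listing the data categories
# it is certified to handle. All entries below are examples.
tool_db = [
    {
        "name": "ChatGPT (consumer)",
        "classification": "unconstrained",
        "certified_data_categories": [],
        "last_reviewed": date.today().isoformat(),
    },
    {
        "name": "ExampleVendor Enterprise LLM",
        "classification": "constrained",
        "certified_data_categories": ["social_security_numbers"],
        "last_reviewed": date.today().isoformat(),
    },
]

def lookup(name: str) -> dict:
    """Return a tool's record; per the policy, treat any tool the CISO
    has not designated as constrained."""
    for record in tool_db:
        if record["name"].lower() == name.lower():
            return record
    # Unknown tools default to unconstrained, matching the employee
    # obligation to assume unconstrained unless designated otherwise.
    return {"name": name, "classification": "unconstrained",
            "certified_data_categories": []}

print(json.dumps(lookup("ChatGPT (consumer)"), indent=2))
```

Storing the registry as plain JSON keeps it both machine-readable and trivially searchable, which is all the policy requires.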
The General Counsel shall:
Advise the CISO on the initial creation and maintenance of the generative AI tool database.
Review licenses for generative AI tools included in this database, as well as other information provided by their publishers, to determine whether these tools are constrained or unconstrained.
The Chief Privacy Officer shall:
Advise the CISO on the initial creation and maintenance of the generative AI tool database.
All employees shall:
Only use ORGANIZATION_NAME-licensed, -provided, or -built generative AI tools, in accordance with the Acceptable Use Policy (AUP).
Only log in to generative AI tools using their ORGANIZATION_NAME email or credentials.
Assume a generative AI tool to be unconstrained unless the CISO has designated otherwise.
Never input the following categories of data into unconstrained generative AI tools:
Personally Identifiable Information (PII), as defined by the Department of Labor, or Protected Health Information (PHI), as defined by the Health Insurance Portability and Accountability Act (HIPAA), with the following exceptions:
The employee’s own PII or PHI. ORGANIZATION_NAME discourages this practice, makes no warranties regarding what will happen to the data, and notes that many generative AI tools advise against doing so.
PII or PHI that the owner of the data in question has intentionally published on the Internet or other public media while not under duress and which ORGANIZATION_NAME has no obligation to protect. For example, a medical diagnosis posted on social media by a public figure could be input into a generative AI tool if done in accordance with the AUP.
Material non-public information (MNPI), as defined by the Securities and Exchange Commission (SEC).
Passwords, application programming interface (API) keys, or any other secrets that would allow an otherwise unauthorized actor to gain access to ORGANIZATION_NAME systems or data.
Any other information designated in writing by the employee’s management chain and recorded in a machine-readable database searchable by the employee.
Only input data into constrained generative AI tools that the tool in question is certified to handle. For example, if a generative AI tool is certified to handle social security numbers but not medical diagnoses, you may input the former but not the latter.
Prior to any publication of any information on the company web site, social media handle, or any other public forum, analyze it for aggregation risk.
While the publication of individual pieces of information might not violate this policy, generative AI tools are constantly ingesting publicly available information for training purposes.
This could allow such a tool to infer - and reproduce - information not intended to be made public. If the employee has any doubt, they shall request permission in writing from the next level of the management chain.
Not treat generative AI tools as infallible, paying special attention to reviewing responses to security-related questions or in security-sensitive situations, such as when they:
Generate code snippets in response to a prompt;
Provide information regarding supposed best practices for system design;
Offer risk management advice in cybersecurity, privacy, legal, or physical safety contexts.
Business unit general managers shall ensure:
All employees in their business unit are trained on and have acknowledged this policy within 30 days of publication.
The creation of a machine-readable and searchable database of all categories of information - for their specific business unit - that cannot be input into unconstrained generative AI tools. This includes but is not limited to:
Source code
Business plans or strategies
Machine telemetry
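A per-business-unit registry like the one required above could be sketched as follows. The business unit names and category labels are hypothetical examples, not part of the policy.

```python
# Hypothetical per-business-unit registry of information categories
# forbidden from unconstrained generative AI tools. Unit and category
# names are illustrative.
forbidden_categories = {
    "engineering": {"source_code", "machine_telemetry"},
    "corporate_strategy": {"business_plans", "machine_telemetry"},
}

def is_forbidden(business_unit: str, category: str) -> bool:
    """Check whether a data category may not be input into an
    unconstrained tool for a given business unit."""
    return category in forbidden_categories.get(business_unit, set())

print(is_forbidden("engineering", "source_code"))  # True
```

An employee-facing search tool could then answer "may I paste this?" questions by querying this structure before any data leaves the organization.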
All employees accountable for relationships with entities in ORGANIZATION_NAME’s software supply chain shall:
Determine if the relevant entity has access to information this policy forbids from inputting into unconstrained generative AI tools. Document the results in a machine-readable format.
If the entity does have access to such data, request in writing that the entity provide its policy regarding unconstrained generative AI tools, or a summary of the provisions, within 30 days.
15 days after receipt of the entity’s policy, determine whether it is more or less restrictive than ORGANIZATION_NAME’s policy. Document the results in a machine-readable format.
If the entity does not provide a policy, does not have one, or its policy is less restrictive than ORGANIZATION_NAME's, either:
Terminate the entity's access to data forbidden from entry into unconstrained generative AI tools within 15 days of the determination; or
Request risk acceptance per this policy.
Re-assess the types of data accessible by entities in the software supply chain every 30 days, and take the above steps if anything has changed.
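The supply-chain workflow above involves several deadlines (30 days to receive a vendor policy, 15 days to make a determination, 30-day re-assessment cycles). A minimal sketch of a machine-readable assessment record that tracks them might look like this; the entity name, field names, and dates are all assumptions for illustration.

```python
from datetime import date, timedelta

# Hypothetical machine-readable record of one supply-chain assessment.
assessment = {
    "entity": "ExampleVendor Inc.",
    "has_access_to_forbidden_data": True,
    "policy_requested": date(2023, 4, 1),
    "policy_received": date(2023, 4, 20),
    "restrictiveness": "less_restrictive",  # relative to ORGANIZATION_NAME's
}

def determination_deadline(received: date) -> date:
    """Determination is due 15 days after receipt of the entity's policy."""
    return received + timedelta(days=15)

def next_review(last_review: date) -> date:
    """Re-assessment of accessible data types is due every 30 days."""
    return last_review + timedelta(days=30)
```

Keeping the dates as structured fields (rather than free text) makes it straightforward to automate reminders when a deadline approaches.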
Risk acceptance
Business unit general managers have the authority to accept residual risk resulting from deviations from this policy if, in their judgment, the potential benefit to ORGANIZATION_NAME, its customers, or other stakeholders warrants the move.
Risk acceptance shall follow the PRIDE decision-making framework, with the following roles:
Perform
Employee requesting risk acceptance
Recommend
Employee requesting risk acceptance
Input
Vice President of Engineering
Decide
Business-line general manager
Ensure consultation
Chief Information Security Officer
General Counsel
Chief Privacy Officer (if data in question is PII or PHI)
Business unit general managers shall document approvals or denials of risk acceptance requests within 15 days of receipt using a machine-readable, searchable, and access-controlled database.
Compliance
Non-compliance with this policy may result in disciplinary action, up to and including termination.
If federal, state, local, or tribal laws or regulations conflict with this policy, the former shall control.
Should any given business unit violate this policy two or more times within a 30-day period, the business unit general manager shall ensure:
A root cause analysis is conducted, documented, and the results shared with the entire business unit;
A complete retraining for the business unit; and
Re-acknowledgement by all business unit employees of this policy.
Definitions
Generative Artificial Intelligence (AI): a subset of AI techniques that involves the use of algorithms to generate new, original data. Unlike traditional AI, which is designed to solve specific tasks based on pre-existing data, generative AI algorithms can produce new data that has never been seen before.
Unconstrained generative AI tool: one with which ORGANIZATION_NAME has no legal relationship except for a click-through license agreement and which has made no specific guarantees regarding the confidentiality of information provided to it.
Constrained generative AI tool: one with which ORGANIZATION_NAME has a legal relationship in addition to a click-through license agreement (or has such a license which the General Counsel - with the input of the CISO - has affirmatively categorized as constrained) that makes specific guarantees regarding the confidentiality of information provided to it.
Re-use of this policy is authorized in accordance with the Apache 2.0 license.
I hope that you find Deploying Securely useful. If so, here are some more resources that can help:
1. My free, risk-based vulnerability management (VM) email course. Use it to build the foundation for a quantitative VM program in 5 days.
2. The Deploy Securely VM SOP. A template for running a program developed from years of real-world experience, it will save you huge amounts of time and effort.
3. StackAware’s 0 → 1 vulnerability management offering, which will get you immediate assistance in building your security program.
Appendix - below is a policy that GPT-4 generated to address the prompt "Please write me a formal information security policy for interacting with generative AI tools like ChatGPT. Use a formal writing structure but do not use boilerplate or repeat information." Suffice it to say, I don't think we are ready for "self-regulating" AI tools yet.
Introduction
The purpose of this policy is to provide specific guidelines for employees who interact with generative AI tools, such as ChatGPT, to ensure the confidentiality, integrity, and availability of sensitive information.
Scope
This policy applies to all employees who use generative AI tools in the course of their work.
Policy
Authorization: Employees must obtain prior authorization from their manager and/or the Information Security team before using generative AI tools.
Data Sensitivity: Employees must take care to avoid exposing sensitive data when using generative AI tools. Sensitive data includes financial information, personally identifiable information (PII), and company trade secrets. Employees must never share or input sensitive data into generative AI tools.
Data Protection: Employees must ensure that output generated by generative AI tools is handled with care. This includes:
a. Avoiding sharing or forwarding output generated by generative AI tools outside of the organization or with unauthorized personnel.
b. Properly disposing of any output generated by the generative AI tool, such as deleting chat logs or other output, in a secure manner.
c. Limiting the scope of the interaction session and only providing the minimum necessary information.
Monitoring: All interactions with generative AI tools must be logged and monitored to ensure compliance with this policy. Monitoring includes the recording of input and output to ensure that no sensitive information is disclosed.
Training: Employees who interact with generative AI tools must be trained on this policy and any related procedures before being granted access to these tools.
Restrictions: The use of generative AI tools is restricted to business-related activities only. Employees may not use generative AI tools for personal or non-business related activities.
Enforcement
Violation of this policy may result in disciplinary action, up to and including termination of employment. Any violations must be reported to the appropriate supervisor or the information security team.
Conclusion
This policy is intended to provide specific guidelines to ensure the confidentiality, integrity, and availability of sensitive information when interacting with generative AI tools. Adherence to this policy is mandatory for all employees who use these tools in the course of their work.