AI tools abound – but what does your company use? How do you use them? Is there anything that’s off limits?

Would different people in your organization answer those questions differently? Probably – which means it’s time for an AI Policy, a document that outlines how you approach AI as a company.

How to Create an AI Policy



Customers are starting to ask questions. Primarily, they want to know how AI tools affect the product or service you’re delivering to them.

Skepticism around AI tools often stems from deeper concerns: customers worry that you’re automating what you used to do manually and exposing them to errors, or that they’re overpaying for something they could now do themselves with an AI tool.

An AI Policy that makes clear how you think about AI and which tools you use for specific purposes creates transparency. Additionally, a central document sharing how you use AI frees you from having to note on every piece of content you create or product you sell that you used an AI tool in the process. Instead, you can point to the policy, which clearly outlines how you think about AI, the role of humans when using AI tools, and what you will and won’t use AI tools to assist with.

With an AI Policy, you’re also able to showcase new tools you’re using that ultimately give customers a better product or service compared to the competition. You may have implemented better reporting or more in-depth testing based on machine learning data – these become differentiators in what you offer and should be showcased and promoted.

To create an AI policy, start by identifying a team and reviewing legal considerations, then craft a policy and outline the tools you use in your organization.


Like most internal policies, an AI Policy is best created by a cross-functional team. Engineering, Sales, and Marketing likely use different tools and don’t necessarily know what the others are using. Having a representative from each of those areas, as well as a C-level sponsor, is a good starting point for your team. Before tackling an AI Policy, make sure the stakeholders involved understand the legal issues at play.

Let’s start with the most basic application of policies – the law. Much of the legal guidance right now consists of recommendations and reminders to adhere to policies already in place. The guidance found today falls into five primary categories:

  1. Data Protection and Privacy: Companies must comply with relevant data protection and privacy regulations when handling personal data collected and processed by AI technology (e.g. GDPR, CCPA, HIPAA). 

    *Note that most AI tools require a terms checkbox stating, in effect, that the information you input into the tool can then be accessed by the tool’s provider. Therefore, protected data should not be used as prompt inputs in generative AI tools. This applies to personal data, but also remember other data that may be important to keep private, such as details about an upcoming product announcement or customer application data.
  2. Intellectual Property: Companies must ensure that their use of AI technology does not infringe on existing intellectual property rights, such as copyright and patents.

    *Note that material solely created by AI tools can’t be copyrighted. Work product must have a significant human component to be eligible for copyright protection.
  3. Liability: Companies should consider potential liability issues associated with the use of AI technology, such as errors and mistakes made by the technology.
  4. Discrimination and Bias: Companies must ensure that any AI technology employed is not discriminatory or biased against any individual or group.
  5. Security: Companies must ensure that appropriate security measures are in place to protect any AI technology employed and the data collected and processed by it.

These legal considerations should be the first checkpoint for any new tool or process you’re evaluating. 


Now it’s time for you to decide: How do you think about AI? What are you willing to do with AI within your company?

Your corporate AI Policy should be a straightforward statement or list of points (see TREW Marketing’s AI policy here).

It can evolve, but it should address answers to the following questions:

  • How do you use AI within your company?
  • How does AI contribute to what you offer to customers?
  • Who is accountable for work product created by AI tools?
  • What limits do you put in place on AI tools?
  • What level of transparency will you have around tools that you use?

If you’re looking for a starting point, consider using the Marketing AI Institute’s AI Manifesto. It’s a document that outlines how the organization thinks about AI, and it’s licensed under Creative Commons. The TREW Marketing AI Policy was derived from this version. 


Tool transparency works two ways. By providing a list of AI tools you use, you’re communicating to customers what they can expect, and you’re also creating a specific list for your internal staff.

If someone wants to use a new tool that’s not on the list in a way that will impact their performance or a customer product, the tool should first be discussed and evaluated against both the legal considerations outlined above and the AI Policy you’ve created for your company.

See TREW’s list of tools here. We’ve organized these tools by category and given specific examples of how we use them.


By forming a cross-functional team, assessing legal considerations, creating a company-specific policy, and outlining your tools, you can use AI in a safe and helpful way throughout your organization. 

AI tools are changing rapidly, so as a best practice, be sure to evaluate your tools list quarterly. 

Looking to get more out of AI tools for content development?

Watch our on-demand webinar to learn more about current tools, best practices, and how to prompt AI tools in a way that produces helpful results.

Improving Prompts in Generative AI Tools