From the C-suite down, the Seer team agrees that Artificial Intelligence is the biggest disruptor our industry has seen to date, transforming both how we search and how we work. With that in mind, Seer needed to evolve alongside it and show our team there’s a future for them in this next iteration. The biggest question our leaders were asking was how to prepare our team for these shifts, and it’s clear Seer’s not the only organization asking.
The Marketing AI Institute’s 2024 State of Marketing AI Report continues to show education as the biggest missing component, stating: “This is the fourth-annual State of Marketing AI Report, and every year, a lack of AI education and training is cited as the most common barrier to AI adoption in marketing.”
Before hopping straight into a Learning & Development roadmap for the Seer team and our clients, there were initial steps we needed to complete. We worked in lockstep cross-functionally to provide focused support that would best aid the team’s daily tasks and deliver the biggest impact. Step 1 in this process was developing an AI Council: a centralized, cross-organizational team that can drive strategy across divisions, legal, and operations, and secure executive buy-in. We’ll dig deeper into our findings and advice on building an AI Council in a future post.
Aligning on the Basics
First things first, Seer didn’t want any misunderstanding about our AI stance - we do not see this as a replacement for humans doing these tasks, but rather as an opportunity to re-imagine how we approach the work we’ve done for so long, and a tool to make the day-to-day more efficient and enjoyable. Seer prides itself on being at the vanguard of innovation within our industry, but not without clearly defining the legal and ethical guardrails we require to keep our organization, our employees, and our clients safeguarded in the process.
To make our ethos clear, we developed an AI Policy that the Seer team can continue to reference throughout their AI journey.
Steps for Creating an AI Policy
- Work with Leadership and Legal - To make your purpose and guidelines for Artificial Intelligence clear, it’s important to ensure you’re aligned from the very top on the goal for AI within the organization and what’s acceptable versus what’s not. From there, work with legal counsel to ensure that explanation is clearly communicated, well-defined, and incorporates the legalities that we on the business front may not have naturally considered.
- Incorporate Accountability & Ethical Considerations - Clearly define that responsibility will never be outsourced and that we are ultimately accountable for the work we produce. It’s important to clearly outline what we will not be using AI for: at no point should we be misleading or manipulating customers, we are responsible for reviewing outputs for biases and inaccuracies, and employees should not impersonate any other person without explicit, written consent and a definable use case.
- Share Approved vs Unapproved Use Cases - Clarity is kindness, and the clearer we can be with our employees about acceptable versus unacceptable uses, the more empowered they feel to test on their own. We detail which use cases are acceptable to roll with on their own (e.g. using publicly available data for brainstorming), which require approval (e.g. leveraging specific data sets that opted-in clients approved for testing in secure platforms), and which are unacceptable (e.g. inserting PII into these conversational models).
Training the Team on Our AI Policy
Seer shared the AI Policy with our team to review on their own and then discussed the meaning and purpose of this policy within our Company All Hands. To better foster individual support, we used our learning management system, Seismic Learning, to break this down further.
Within our AI Policy training, we further emphasized the purpose, considerations, and use cases, providing knowledge checks throughout to help solidify these points.
We then quizzed our team at the end, with questions asking employees to:
- Acknowledge they fully read Seer’s AI Policy.
- Confirm that, by using AI within their work, they agree to comply with the guardrails set within Seer’s AI Policy.
- Demonstrate they understand the purpose of the policy.
- State whether example scenarios shared are acceptable or unacceptable use cases, thereby helping our AI Council evaluate team comprehension.
- Identify the methods used to gain approval for specific use cases.
From our training, we evaluated team comprehension to identify potential gaps and incorporated training for those gaps into our ongoing live workshops, which we’ll dive into in the next steps of our training program.
As part of this training, it was important for Seer to open clear avenues of communication. Our AI Council conducts weekly check-ins across the organization on what’s being worked on with AI, offers individual support across teams, and maintains a centralized email address where formal questions and concerns can be raised with the full AI Council and leadership team members for thorough investigation.
With our AI Policy in place and the team trained on it, we provided ChatGPT Team licenses to all account team members. This lets us ensure that conversations held with ChatGPT are not used in OpenAI’s training data. While this is our tool of choice for now, we remain open to alternatives and continue testing as the market evolves.
From there, we moved into tactical training on the foundational elements of understanding these conversational models and how to use them. You can dive deeper into the disruption analysis process with our VP of Generative AI, where Alisa Scharf details how Seer is integrating AI into our offerings.
We’ll continue to share our experience in leading an AI transformation at Seer in an ongoing series of Seer’s AI learning and development for our team!