More custom GPTs are being released every day to OpenAI’s GPT library, including GPTs released by brands we already know and trust:
- Adobe Express: Stand out with Adobe Express. Quickly and easily make impactful social posts, images, videos, flyers, and more.
- Canva: Effortlessly design anything: presentations, logos, social media posts and more.
- Supermetrics: Fetch and analyze marketing data from Facebook Ads, Google Ads, Google Analytics 4, and Google Search Console. Trusted by over 1 million marketers worldwide for analysis.
Recently, a question emerged at Seer, and we wanted to share our process for answering it with our community:
How do I know when it’s safe to use a custom GPT?
At Seer, increasing our speed to value while keeping our clients’ data secure is among our highest priorities. To that end, we strongly recommend using non-public marketing data only within tools we can trust with that data. The worst-case scenario would look something like the Samsung data leak in 2023.
It’s important to us and our clients that we’re able to explore innovative new solutions while keeping their data safe. With that in mind, here’s the process we used to review Supermetrics’ GPT.
Step 0: Roll out company wide education, policy, and a governance council
In Q4 we released our AI policy to the full Seer team. This work was an extensive collaboration between our legal counsel, our executive team, and the people most involved in innovation for our clients. We aimed to strike a balance between the need to experiment and the need to keep our clients’ data safe.
Step 1: Surface new opportunities for collaboration
Wil caught wind of the new GPT and shared it in our Slack channel focused on GenAI knowledge sharing.
Step 2: Reference your AI Policy
Our Director of Paid Media immediately referenced our AI policy. We’ve asked all clients to opt into our ability to use their non-public marketing data with Generative AI. As part of that policy, we will only use non-public data with tools that are guaranteed not to train on our inputs and conversations. For now, our tool of choice at Seer is ChatGPT for Teams.
Step 3: Leverage your AI Council
Step 4: Wait for your AI Council to respond
Heh, let’s consider this an optional step. But really, I was proud of the team for understanding the need to pause and the need to review our terms. We want to move quickly, but security outweighs speed.
Step 5: When in doubt, reach out to your support rep
In this instance, we were confident that as long as the team abided by our AI Policy, we were safe to test this tool. But this was the first time the situation had come up, and we wanted to be sure.
Step 6: Test, learn, share, repeat
This sequence could end in nothing. Or it could lead to improvements in some of our current workflows. We won’t know until we test and learn. Once we do, we’ll be sure to update our community on what we’ve learned.