Multi-Platform Guide

Deploy on any platform and build the AI Task Augmentation Analyst hands-on

Your Methodology Is Portable

The process you learned — define, data, prompt, iterate, deploy — works on any LLM platform that supports custom instructions and file uploads. Your prompt and methodology are 90% portable. The last 10% is platform-specific packaging.

🟢 OpenAI Custom GPTs

| Component | Where It Goes |
| --- | --- |
| System prompt | “Instructions” field |
| Knowledge files | “Knowledge” upload section (up to 20 files) |
| Conversation starters | “Conversation starters” field (up to 4) |
| Capabilities | Toggles: Web Search, Canvas, DALL-E, Code Interpreter |

Strengths

Largest user base, built-in sharing/publishing, Code Interpreter for data analysis

Limitations

Knowledge file size limits, no real-time file editing, system prompt visible to determined users

Tip: Enable Code Interpreter if your GPT works with CSV/spreadsheet data. Upload the AI Wins Dashboard template as a knowledge file.

🟠 Anthropic Claude Projects

| Component | Where It Goes |
| --- | --- |
| System prompt | “Project Instructions” (custom instructions field) |
| Knowledge files | “Project Knowledge” (upload files to the project) |
| Conversation starters | Not natively supported — include example prompts in your instructions |
| Capabilities | Artifacts (structured outputs), Analysis tool (code execution) |

Strengths

Larger context window (200K tokens), Artifacts for structured outputs, strong analytical reasoning

Limitations

No built-in sharing marketplace, no image generation, projects are workspace-scoped

Tip: The larger context window means you can upload more O*NET files (potentially all tiers). Use Artifacts for interactive table outputs.

🔵 Google Gemini Gems

| Component | Where It Goes |
| --- | --- |
| System prompt | “Instructions” field |
| Knowledge files | Upload files directly to the Gem |
| Conversation starters | Not natively supported |
| Capabilities | Google Search integration, code execution |

Strengths

Deep Google Workspace integration (Sheets, Docs), built-in Google Search

Limitations

Smaller knowledge file limits, less mature custom tool ecosystem

Tip: Leverage Google Sheets integration for direct dashboard population. Use Google Search capability for real-time labor market data.

What's Portable, What's Not

| Component | Portable? | Notes |
| --- | --- | --- |
| System prompt | Yes | Copy-paste across platforms with minor adjustments |
| Knowledge files | Yes | Same files work everywhere |
| Scoring rubrics | Yes | Methodology is platform-agnostic |
| Output format | Mostly | Markdown tables work everywhere; Artifacts/Canvas differ |
| Conversation starters | No | Platform-specific; embed as examples in instructions where not supported |
| Capabilities | No | Each platform has different toggles |
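The split above can be sketched in code: keep one platform-agnostic agent definition and package it per platform. This is a minimal sketch, assuming hypothetical field names (`instructions`, `knowledge`, `conversation_starters`) that do not correspond to any platform's actual API — the point is only that starters get folded into the instructions wherever a native field is missing.

```python
# Hypothetical sketch: one portable agent config, packaged per platform.
# Field names are illustrative, not real platform APIs.

AGENT = {
    "system_prompt": "You are the AI Task Augmentation Analyst...",
    "knowledge_files": ["Task Statements.txt", "AI Wins Dashboard.pdf"],
    "starters": [
        "Analyze this job description for AI augmentation:",
        "Generate AI Wins Dashboard for this job description.",
    ],
}

def package(agent: dict, platform: str) -> dict:
    """Map the portable pieces onto platform-specific packaging."""
    if platform == "openai":
        # Native fields: up to 20 knowledge files, up to 4 starters
        return {
            "instructions": agent["system_prompt"],
            "knowledge": agent["knowledge_files"][:20],
            "conversation_starters": agent["starters"][:4],
        }
    # Claude Projects / Gemini Gems: no native starter field, so embed
    # example prompts directly in the instructions
    examples = "\n\nExample prompts:\n" + "\n".join(
        f"- {s}" for s in agent["starters"]
    )
    return {
        "instructions": agent["system_prompt"] + examples,
        "knowledge": agent["knowledge_files"],
    }

print(package(AGENT, "claude")["instructions"])
```

Only the `package` function changes per platform; `AGENT` itself — the 90% — never does.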

Hands-On: Build the AI Task Augmentation Analyst

Follow these steps to build your own version. This walkthrough uses OpenAI's GPT Builder, but the methodology works on any platform.

Step 1: Gather Your Data

Download O*NET database files and prioritize using this tiering:

| Tier | Files | Upload? |
| --- | --- | --- |
| Tier 1 | Occupation Data, Task Statements, Task Ratings | Always |
| Tier 2 | Work Activities, Skills, Technology Skills | If limits allow |
| Tier 3 | Abilities, Knowledge, Work Context | Optional |
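The tiering amounts to a greedy fill: Tier 1 always ships, and lower tiers take whatever upload slots remain. A sketch — the 20-file default matches OpenAI's knowledge limit noted earlier, and the file names follow the tier table above:

```python
# Greedy tier fill: Tier 1 first, then Tier 2, then Tier 3,
# until the platform's upload limit is reached.

TIERS = {
    1: ["Occupation Data", "Task Statements", "Task Ratings"],
    2: ["Work Activities", "Skills", "Technology Skills"],
    3: ["Abilities", "Knowledge", "Work Context"],
}

def select_uploads(max_files: int = 20) -> list[str]:
    selected = []
    for tier in sorted(TIERS):
        for name in TIERS[tier]:
            if len(selected) < max_files:
                selected.append(name)
    return selected

print(select_uploads(5))
# At a 5-file limit, all of Tier 1 fits but only two Tier 2 files make the cut
```

On Claude Projects, where the larger context window is the practical limit, the same function with a higher `max_files` admits all three tiers.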

Then create your AI Wins Dashboard template (a simple spreadsheet with columns: Job Title, Time Saved, Quality Delta, Next Pilot) and export it as a PDF.
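The template itself is just a four-column sheet. A minimal sketch using Python's csv module, with the columns named above; the example row is illustrative:

```python
# Build the AI Wins Dashboard template: a header row plus one
# illustrative entry. Write to a StringIO here; swap in open("...", "w")
# to produce a real file.
import csv
import io

COLUMNS = ["Job Title", "Time Saved", "Quality Delta", "Next Pilot"]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=COLUMNS)
writer.writeheader()
writer.writerow({
    "Job Title": "Marketing Coordinator",
    "Time Saved": "8-10 hrs/week",
    "Quality Delta": "More consistent blog drafts",
    "Next Pilot": "AI-assisted blog drafting",
})
print(buf.getvalue())
```

Export the populated sheet as a PDF before uploading, per the step above.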

Step 2: Set Up the GPT

Name: AI Task Augmentation Analyst

Description: Analyzes job descriptions to assess tasks for AI automation potential and generate an AI Wins Dashboard.

Instructions: Paste the full system prompt (v3 — the latest version from the Iterate & Deploy section)

Step 3: Upload Knowledge Files

Upload in this order:

  1. O*NET data files (Tier 1 first, then Tier 2)
  2. AI Wins Dashboard template (PDF)
  3. Example analysis file

Step 4: Configure Conversation Starters

  • “Analyze this job description for AI augmentation:”
  • “What are the top tasks we can automate for this role?”
  • “Generate AI Wins Dashboard for this job description.”
  • “Give me risk vs moat scores for this position.”

Step 5: Enable Capabilities

  • Web Search — for real-time labor market context
  • Canvas — for editing outputs collaboratively
  • Image Generation — for quadrant visualizations
  • Code Interpreter — for CSV/spreadsheet operations

Step 6: Test It

Paste this sample job description and verify the output:

“Marketing Coordinator at a B2B SaaS company. Responsibilities include scheduling social media posts, writing blog drafts, coordinating with design team, tracking campaign metrics in HubSpot, responding to inbound inquiries, and managing event logistics for quarterly webinars.”

Expected Output Should Include:

  • SOC code match with confidence level
  • 6 tasks scored on Automation Risk and Strategic Moat
  • Each task classified as Automate, Augment, or Human-led
  • An AI Wins Dashboard row showing ~8-10 hrs/week saved
  • A pilot recommendation for AI-assisted blog drafting
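The Automate / Augment / Human-led split in that output is driven by the two rubric scores. Here is a sketch with illustrative thresholds on a 1-5 scale — the actual cutoffs live in the scoring rubrics of your system prompt, not in this guide:

```python
# Illustrative quadrant classifier over the two rubric scores.
# Thresholds are assumptions for the sketch, not the guide's rubric.

def classify(automation_risk: float, strategic_moat: float) -> str:
    """High risk + low moat -> Automate; high risk otherwise -> Augment;
    low risk -> Human-led. Both scores on a 1-5 scale."""
    if automation_risk >= 4 and strategic_moat <= 2:
        return "Automate"
    if automation_risk >= 3:
        return "Augment"
    return "Human-led"

print(classify(5, 1))  # e.g. scheduling social posts -> "Automate"
print(classify(4, 4))  # e.g. writing blog drafts -> "Augment"
print(classify(2, 5))  # e.g. coordinating with the design team -> "Human-led"
```

Running the six Marketing Coordinator tasks through a rubric like this is what produces the quadrant breakdown the expected output calls for.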

Quick Reference Checklist

Use this checklist when building any custom GPT:

Phase 1: Define

  • ☐ Articulated the specific decision this tool helps make
  • ☐ Identified the target user(s)
  • ☐ Defined expected input format
  • ☐ Designed output format (sketch tables/dashboards first)
  • ☐ Listed what the tool should NOT do

Phase 2: Data

  • ☐ Identified authoritative data source(s)
  • ☐ Created a tiered priority for file uploads
  • ☐ Built a data fallback chain

Phase 3: Prompt

  • ☐ Wrote Identity & Purpose (2-3 sentences)
  • ☐ Defined step-by-step workflow
  • ☐ Created explicit scoring rubrics
  • ☐ Specified exact output formats with headers
  • ☐ Added uncertainty handling and clarification triggers

Phase 4: Iterate

  • ☐ Tested with standard and ambiguous inputs
  • ☐ Compressed prompt (extracted examples, condensed rubrics)
  • ☐ Added actionability (recommendations, next steps)
  • ☐ Had at least one other person test it

Phase 5: Package

  • ☐ Set name and description
  • ☐ Uploaded knowledge files in priority order
  • ☐ Added conversation starters
  • ☐ Enabled appropriate capabilities
  • ☐ Created supporting materials (cheat sheet, template, example)

Phase 6: Test & Refine

  • ☐ Ran 5+ diverse inputs through the tool
  • ☐ Verified output consistency across runs
  • ☐ Gathered user feedback
  • ☐ Incorporated feedback into next iteration

💡 Key Takeaways

  • Your prompt and data are 90% portable — the last 10% is platform-specific packaging
  • Choose your platform based on strengths — GPTs for sharing, Claude for deep analysis, Gemini for Google Workspace integration
  • Where conversation starters aren't supported, embed example prompts directly in your instructions
  • Always test with the sample job description — verify your output matches the expected format before sharing
  • Use the checklist — it's your quality gate for any custom GPT you build