Multi-Platform Guide
Deploy on any platform and build the AI Task Augmentation Analyst hands-on
Your Methodology Is Portable
The process you learned — define, data, prompt, iterate, deploy — works on any LLM platform that supports custom instructions and file uploads. Your prompt and methodology are 90% portable. The last 10% is platform-specific packaging.
OpenAI Custom GPTs
| Component | Where It Goes |
|---|---|
| System prompt | “Instructions” field |
| Knowledge files | “Knowledge” upload section (up to 20 files) |
| Conversation starters | “Conversation starters” field (up to 4) |
| Capabilities | Toggle: Web Search, Canvas, DALL-E, Code Interpreter |
Strengths
Largest user base, built-in sharing/publishing, Code Interpreter for data analysis
Limitations
Knowledge file size limits, no real-time file editing, system prompt visible to determined users
Tip: Enable Code Interpreter if your GPT works with CSV/spreadsheet data. Upload the AI Wins Dashboard template as a knowledge file.
Anthropic Claude Projects
| Component | Where It Goes |
|---|---|
| System prompt | “Project Instructions” (custom instructions field) |
| Knowledge files | “Project Knowledge” (upload files to the project) |
| Conversation starters | Not natively supported — include example prompts in your instructions |
| Capabilities | Artifacts (structured outputs), Analysis tool (code execution) |
Strengths
Larger context window (200K tokens), Artifacts for structured outputs, strong analytical reasoning
Limitations
No built-in sharing marketplace, no image generation, projects are workspace-scoped
Tip: The larger context window means you can upload more O*NET files (potentially all tiers). Use Artifacts for interactive table outputs.
Google Gemini Gems
| Component | Where It Goes |
|---|---|
| System prompt | “Instructions” field |
| Knowledge files | Upload files directly to the Gem |
| Conversation starters | Not natively supported |
| Capabilities | Google Search integration, code execution |
Strengths
Deep Google Workspace integration (Sheets, Docs), built-in Google Search
Limitations
Smaller knowledge file limits, less mature custom tool ecosystem
Tip: Leverage Google Sheets integration for direct dashboard population. Use Google Search capability for real-time labor market data.
What's Portable, What's Not
| Component | Portable? | Notes |
|---|---|---|
| System prompt | Yes | Copy-paste across platforms with minor adjustments |
| Knowledge files | Yes | Same files work everywhere |
| Scoring rubrics | Yes | Methodology is platform-agnostic |
| Output format | Mostly | Markdown tables work everywhere; Artifacts/Canvas differ |
| Conversation starters | No | Platform-specific; embed as examples in instructions where not supported |
| Capabilities | No | Each platform has different toggles |
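One way to keep the portable 90% in one place is a small platform-agnostic config file that separates what travels from what doesn't. This is a minimal sketch; the schema and file paths are purely illustrative (no platform reads this format natively), but it makes re-packaging for a new platform a copy-paste job.

```python
# Sketch of a platform-agnostic config for the analyst.
# Field names and paths are illustrative, not any platform's real API schema.
import json

config = {
    "name": "AI Task Augmentation Analyst",
    "system_prompt": "prompts/system_v3.md",   # portable: copy-paste everywhere
    "knowledge_files": [                        # portable: same files everywhere
        "onet/Occupation Data.txt",
        "onet/Task Statements.txt",
        "onet/Task Ratings.txt",
        "templates/ai_wins_dashboard.pdf",
    ],
    "conversation_starters": [                  # NOT portable: embed in instructions
        "Analyze this job description for AI augmentation opportunities.",
    ],
    "capabilities": {                           # NOT portable: per-platform toggles
        "openai": ["code_interpreter", "web_search"],
        "claude": ["artifacts", "analysis_tool"],
        "gemini": ["google_search", "code_execution"],
    },
}

with open("analyst_config.json", "w") as f:
    json.dump(config, f, indent=2)
```

When you move platforms, the top three fields transfer unchanged; only the last two need per-platform handling.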
Hands-On: Build the AI Task Augmentation Analyst
Follow these steps to build your own version. This walkthrough uses OpenAI's GPT Builder, but the methodology works on any platform.
Gather Your Data
Download the O*NET database files and prioritize uploads using this tiering:
| Tier | Files | Upload? |
|---|---|---|
| Tier 1 | Occupation Data, Task Statements, Task Ratings | Always |
| Tier 2 | Work Activities, Skills, Technology Skills | If limits allow |
| Tier 3 | Abilities, Knowledge, Work Context | Optional |
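The tiering above can be mechanized against a platform's file limit (20 on OpenAI, tighter elsewhere). A minimal sketch, with file names matching the table and a couple of slots reserved for your non-O*NET uploads:

```python
# Sketch: choose which O*NET files to upload given a platform's file limit.
# Tier contents match the table above; the 20-file cap is OpenAI's.
TIERS = {
    1: ["Occupation Data", "Task Statements", "Task Ratings"],
    2: ["Work Activities", "Skills", "Technology Skills"],
    3: ["Abilities", "Knowledge", "Work Context"],
}

def files_to_upload(max_files, reserved=2):
    """Fill the upload list tier by tier, reserving slots for the
    dashboard template and example analysis file."""
    budget = max_files - reserved
    selected = []
    for tier in sorted(TIERS):
        for name in TIERS[tier]:
            if len(selected) < budget:
                selected.append(name)
    return selected

# A tight 6-file limit yields all of Tier 1 plus one Tier 2 file.
print(files_to_upload(6))
```

With a generous limit the function simply returns all three tiers in priority order.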
Then create your AI Wins Dashboard template (a simple spreadsheet with columns: Job Title, Time Saved, Quality Delta, Next Pilot) and export it as a PDF.
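The template can be generated in a few lines (convert the CSV to PDF in your spreadsheet tool before uploading). The column names come from the description above; the example row is illustrative, so the GPT sees the expected shape:

```python
# Sketch: generate the AI Wins Dashboard template as a CSV.
import csv

COLUMNS = ["Job Title", "Time Saved", "Quality Delta", "Next Pilot"]

with open("ai_wins_dashboard.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    # One illustrative row so the GPT sees how entries should look.
    writer.writerow(["Content Marketer", "8-10 hrs/week",
                     "Higher first-draft quality", "AI-assisted blog drafting"])
```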
Set Up the GPT
Name: AI Task Augmentation Analyst
Description: Analyzes job descriptions to assess AI task automation and generate dashboards.
Instructions: Paste the full system prompt (v3 — the latest version from the Iterate & Deploy section)
Upload Knowledge Files
Upload in this order:
- O*NET data files (Tier 1 first, then Tier 2)
- AI Wins Dashboard template (PDF)
- Example analysis file
Configure Conversation Starters
Add up to four starters that mirror the inputs you expect (for example, "Analyze this job description for AI augmentation opportunities").
Enable Capabilities
Turn on Code Interpreter (the GPT works with CSV/spreadsheet data) and Web Search.
Test It
Paste this sample job description and verify the output:
Expected Output Should Include:
- SOC code match with confidence level
- 6 tasks scored on Automation Risk and Strategic Moat
- Each task classified as Automate, Augment, or Human-led
- An AI Wins Dashboard row showing ~8-10 hrs/week saved
- A pilot recommendation for AI-assisted blog drafting
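A rough sketch of automating that verification step. The regex patterns are assumptions about the format your v3 prompt specifies (e.g. that SOC codes appear in the standard NN-NNNN.NN form), so adjust them to your actual output spec:

```python
# Sketch: sanity-check the GPT's output text against the expected elements.
import re

def check_output(text):
    """Return a list of problems; an empty list means the output passed."""
    issues = []
    # SOC codes use the NN-NNNN.NN pattern, e.g. "27-3042.00".
    if not re.search(r"\b\d{2}-\d{4}\.\d{2}\b", text):
        issues.append("no SOC code found")
    for label in ("Automate", "Augment", "Human-led"):
        if label not in text:
            issues.append(f"missing classification: {label}")
    # Hours-saved estimate, e.g. "8-10 hrs/week".
    if not re.search(r"\d+\s*-\s*\d+\s*hrs?/week", text):
        issues.append("no hours-saved estimate")
    return issues
```

Run it on each test output; any non-empty result points at the missing element.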
Quick Reference Checklist
Use this checklist when building any custom GPT:
Phase 1: Define
- ☐ Articulated the specific decision this tool helps make
- ☐ Identified the target user(s)
- ☐ Defined expected input format
- ☐ Designed output format (sketch tables/dashboards first)
- ☐ Listed what the tool should NOT do
Phase 2: Data
- ☐ Identified authoritative data source(s)
- ☐ Created a tiered priority for file uploads
- ☐ Built a data fallback chain
Phase 3: Prompt
- ☐ Wrote Identity & Purpose (2-3 sentences)
- ☐ Defined step-by-step workflow
- ☐ Created explicit scoring rubrics
- ☐ Specified exact output formats with headers
- ☐ Added uncertainty handling and clarification triggers
Phase 4: Iterate
- ☐ Tested with standard and ambiguous inputs
- ☐ Compressed prompt (extracted examples, condensed rubrics)
- ☐ Added actionability (recommendations, next steps)
- ☐ Had at least one other person test it
Phase 5: Package
- ☐ Set name and description
- ☐ Uploaded knowledge files in priority order
- ☐ Added conversation starters
- ☐ Enabled appropriate capabilities
- ☐ Created supporting materials (cheat sheet, template, example)
Phase 6: Test & Refine
- ☐ Ran 5+ diverse inputs through the tool
- ☐ Verified output consistency across runs
- ☐ Gathered user feedback
- ☐ Incorporated feedback into next iteration
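The consistency check in Phase 6 can be partly automated. This sketch assumes a hypothetical output format where each task's score appears as "Automation Risk: <n>"; swap in whatever pattern your prompt actually emits:

```python
# Sketch: compare Automation Risk scores extracted from repeated runs
# of the same input, flagging drift beyond a tolerance.
import re

def extract_scores(text):
    return [int(m) for m in re.findall(r"Automation Risk:\s*(\d+)", text)]

def consistent(runs, tolerance=1):
    """True if every task's score varies by at most `tolerance` across runs."""
    score_sets = [extract_scores(r) for r in runs]
    if len({len(s) for s in score_sets}) != 1:
        return False  # runs scored different numbers of tasks
    return all(max(col) - min(col) <= tolerance for col in zip(*score_sets))
```

Scores that drift by more than a point between identical runs usually mean the rubric in your prompt needs tightening.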
Key Takeaways
- ✓ Your prompt and data are 90% portable — the last 10% is platform-specific packaging
- ✓ Choose your platform based on strengths — GPTs for sharing, Claude for deep analysis, Gemini for Google Workspace integration
- ✓ Where conversation starters aren't supported, embed example prompts directly in your instructions
- ✓ Always test with the sample job description — verify your output matches the expected format before sharing
- ✓ Use the checklist — it's your quality gate for any custom GPT you build