Developers around the world now have access to Google’s latest AI model, Gemini 3 Pro, through GitHub Copilot — but with major limitations. The public preview, confirmed in GitHub’s official documentation as of November 20, 2025, marks the first time the model is available outside Google’s internal systems. Yet here’s the twist: you can’t use its Agent, Ask, or Edit features. That means no conversational coding help, no natural language edits, no autonomous refactoring — just raw code generation. For developers used to Copilot’s smart suggestions, this feels like getting a Ferrari with the engine disabled.
What’s actually available — and what’s not
The GitHub documentation, last updated at 12:53 PM UTC on November 20, 2025, lists Gemini 3 Pro alongside 11 other AI models from four providers. It’s a crowded field. OpenAI’s GPT-5 and GPT-4.1 are still in General Availability, as is Gemini 2.5 Pro, Google’s previous flagship. But Gemini 3 Pro is stuck in preview mode — a testing ground with no roadmap. What’s more, the model shows up with a cryptic "1" in the first column of GitHub’s compatibility matrix and "Not applicable" in the second. No explanation. No documentation. No hint as to whether this refers to model version, priority, or some internal flag. GitHub doesn’t say. And that’s the problem: this isn’t a product launch. It’s a quiet test.
The October 23 migration: A pattern of silent updates
This isn’t the first time GitHub has quietly reshuffled AI models. On October 23, 2025, it executed a sweeping update that retired Gemini 2.0 Flash and mapped it to Gemini 2.5 Pro. Simultaneously, it upgraded Claude Sonnet 3.7 to Sonnet 4, o1-mini to GPT-5 mini, and several other models. The pattern is clear: GitHub treats AI model updates like software patches — silent, coordinated, and rarely announced. Yet Gemini 3 Pro didn’t join that migration. It appeared later — not as an upgrade, but as a parallel experiment. That suggests Google and GitHub aren’t syncing their release calendars. Or worse, Google isn’t ready to fully commit. The lack of mode support hints at incomplete integration — or perhaps internal resistance at Google to open up its most advanced model too soon.
Who’s really winning the AI coding race?
OpenAI still leads in functionality. GPT-5 and GPT-4.1 support full Agent and Edit modes. So does Claude Sonnet 4 from Anthropic (though Anthropic isn’t named in the docs, the model name gives it away). Even xAI’s Grok Code Fast 1 — Elon Musk’s lesser-known contender — is fully functional. Google’s offering? Barely functional. The absence of context window specs, training data cutoffs, or language support details in GitHub’s docs is telling. They’re not sharing. Not even the basics. Meanwhile, OpenAI and Anthropic are transparent about their model capabilities — even in public previews. Google’s silence feels strategic, not technical.
Why this matters to developers
For coders, this isn’t just about which AI is faster. It’s about workflow. Agent mode lets AI handle entire tasks — like updating a legacy API or writing tests. Ask mode lets you question your code like a senior dev. Edit mode lets you say, “Make this function thread-safe,” and it just does it. Without those, Gemini 3 Pro is just an autocomplete with more parameters. And that’s a problem. If Google’s best model can’t integrate fully into the world’s most popular coding assistant, what’s the point? Developers might be curious. But they won’t switch workflows for a half-baked tool.
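To make the "make this function thread-safe" example concrete, here is a minimal before-and-after sketch of the kind of transformation an Edit-mode prompt is expected to perform. The counter classes are hypothetical illustrations, not anything from GitHub’s or Google’s documentation:

```python
import threading

# Before: a naive counter. The += is a read-modify-write,
# so concurrent increments can be lost under threads.
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1

# After: the version an Edit-mode prompt like
# "make this function thread-safe" should produce.
class SafeCounter:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:  # serialize the read-modify-write
            self.value += 1
```

Without Edit mode, a developer gets no such in-place rewrite from Gemini 3 Pro — only plain completions.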
What’s next? The waiting game
GitHub hasn’t said when — or even if — Gemini 3 Pro will move from preview to General Availability. No timeline. No roadmap. No public commitment from either Google or Microsoft (GitHub’s parent company). That’s unusual. Most tech companies announce previews with clear expectations: "We expect GA in Q1" or "Beta ends in December." This silence could mean one of three things: Google’s model still has stability issues, Microsoft’s integration team is struggling to adapt, or Google is holding back to time a bigger announcement — perhaps tied to its own IDE or developer platform. Whatever the reason, developers are left guessing. The real story here isn’t that Gemini 3 Pro is available. It’s that its availability is so restricted. In a world where AI coding tools are becoming essential, Google’s cautious approach may cost it developer trust — and market share — faster than any bug in its codebase.
Frequently Asked Questions
Can I use Gemini 3 Pro for complex coding tasks in GitHub Copilot right now?
No. Despite being in public preview, Gemini 3 Pro has Agent mode, Ask mode, and Edit mode disabled. You can only use it for basic code completion — similar to older autocomplete tools. It won’t refactor code, explain logic, or generate tests. For full functionality, developers should stick with GPT-5 or Claude Sonnet 4.
Why didn’t Gemini 3 Pro launch alongside the October 23, 2025 model migration?
The October 23 update focused on replacing deprecated models like Gemini 2.0 Flash with newer versions, not introducing new ones. Gemini 3 Pro appeared separately in November, suggesting Google and GitHub aren’t aligned on release timing. The gap may reflect integration challenges or a deliberate strategic choice on Google’s side.
Is Gemini 3 Pro faster or better than GPT-5?
There’s no public data to compare performance. GitHub’s documentation provides no benchmarks, context window sizes, or training data cutoffs for Gemini 3 Pro. While Google claims its models are state-of-the-art, without functional access to key features, developers can’t verify those claims in real-world use.
Will Gemini 3 Pro eventually get full features in Copilot?
Unknown. GitHub hasn’t indicated any timeline for enabling Agent, Ask, or Edit modes. Google has also remained silent. Given that Gemini 2.5 Pro has been stable since October, the lack of progress suggests either technical hurdles or a deliberate decision to limit Gemini 3 Pro’s role — perhaps to protect Google’s own developer tools.
Who else is competing with Google in GitHub Copilot’s AI model lineup?
OpenAI (GPT-5, GPT-4.1), Anthropic (Claude Sonnet 4), and xAI (Grok Code Fast 1) all offer fully functional models. OpenAI leads in feature completeness, while Claude Sonnet 4 is noted for reasoning accuracy. Even Musk’s xAI has more transparent integration than Google. The competition is fierce — and Google is falling behind in usability, not just technology.
Does this mean Google is losing the AI coding war?
Not necessarily — but they’re losing developer mindshare. Google has powerful models, but if they can’t deliver them in a usable way through the platform developers trust most, technical superiority doesn’t matter. The real battle isn’t about parameters — it’s about seamless integration. Right now, Google’s missing the mark.