Cortex Code skills, 90 days in: plugins, not prompts
Cortex Code stopped being “just another AI feature” the moment skills landed. In under three months they’ve become the missing glue: portable, versioned building blocks that turn NPO data stacks into repeatable agent workflows.
TL;DR: Cortex Code went GA in Snowsight in March. By May, the community is shipping skills the way the dbt crowd used to ship packages, Snowflake has decoupled the CLI from Snowflake compute, and somewhere on r/LLMDevs a guy is reminding everyone that an agent is, fundamentally, thirty lines of bash. Those three things are the same story.
I have been watching this corner of the ecosystem closely since I first wrote about Cortex Code back in February, when it still felt like a curiosity. Three months later it is something else: a plugin host. The gravitational pull of that shift is, in my view, still underrated.

The 90-day timeline, briefly
Three dates matter:
- 2026-02-06: the data science and ML skill goes into preview, alongside what is now a catalog of 40+ bundled skills.
- 2026-02-23: Cortex Code CLI announces standalone pricing not coupled to Snowflake compute, and becomes a first‑class way to talk to dbt projects and Airflow DAGs.
- 2026-03-09: Cortex Code in Snowsight hits GA.
Read those three together and the product is no longer “an AI coding agent that lives in Snowflake.” It is a CLI host with a skill marketplace and a plugin manifest format that just happens to know a lot about Snowflake by default. That is a very different thing to compete with.
What a skill actually is
A skill is a directory with a SKILL.md file in it. The file contains the instructions Cortex Code (or other coding agents for that matter) should follow when the skill is activated, plus optional examples and templates. When wrapped into a plugin manifest, a skill can also bundle MCP servers, hooks, and slash commands.
The minimal version looks something like this:
```markdown
---
name: build-semantic-model
description: Generate a Cortex Analyst semantic model YAML from raw tables
activate_on:
  - "generate semantic model"
  - "semantic_model.yaml"
---

# Instructions

1. Ask for the source schema and the analytic grain.
2. Introspect column names and statistics with `INFORMATION_SCHEMA`.
3. Classify columns as dimensions vs. measures.
4. Emit a `semantic_model.yaml` that Cortex Analyst can register.

# Examples

...
```
That’s it. The whole “skill” is a markdown file with YAML front matter and a few examples. Cortex Code reads it, decides when to activate it, and injects it into the prompt. The CLI bundles 40+ of these out of the box, and anyone can add their own under `~/.cortex-code/skills/` or as part of a plugin.
A plugin is the next wrapper up: a single manifest that bundles skills, subagents, slash commands, hooks, and MCP servers. Plugins can be installed from the marketplace, pulled from a Git repo, or shipped as part of a Snowflake connection profile. For example, a “cost guardrails” plugin could ship a billing skill, a governance auditor, and a slash command that runs your monthly spend review.
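To make that concrete, here is what such a "cost guardrails" manifest might look like. Every field name below is a guess at the shape for illustration, not the documented Cortex Code manifest schema:

```yaml
# Hypothetical plugin manifest -- field names are illustrative,
# not the actual Cortex Code schema.
name: cost-guardrails
version: 0.1.0
skills:
  - skills/billing-review       # directories each containing a SKILL.md
  - skills/governance-audit
commands:
  - name: monthly-spend-review  # a slash command the plugin registers
    skill: billing-review
hooks:
  - on: before_query            # hypothetical hook point
    run: scripts/check_role.sh
```

The point is less the exact schema than the packaging: one Git-installable unit that carries instructions, automation, and integrations together.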
The community skill that made me look up
Yesterday on r/snowflake there was a post about a custom skill that takes a raw schema and generates a Cortex Analyst semantic model. Point it at 15 raw tables, wait 30 minutes, get back a `semantic_model.yaml` that is 80% of the way to “business users asking questions in plain English.” They asked, very politely, whether anyone would want it open sourced.
Plot twist: most of the comments were “RemindMe! 7 days,” which is Reddit‑speak for “yes, obviously, please.”
I built a Cortex Code Skill that generates semantic models from raw tables in 30 mins, should I open source it?
by u/rahulsahay123 in snowflake
One reply stood out: “I’d describe how your skill is better or worse than Semantic View Autopilot or the native semantic view skills Snowflake has in CLI.” That is exactly the right question to ask in a marketplace that is 90 days old. Snowflake itself is shipping the same primitives. The interesting differentiation is in defaults: column descriptions, business‑friendly metric naming, opinionated grains, and, frankly, how to incorporate any pre‑existing ontology already built on those raw tables. The kind of thing that is hard to ship as a generic feature and easy to ship as a custom skill.

That asymmetry is exactly how the dbt package ecosystem grew in 2019. dbt‑utils did the boring things that should have been in core but were not, and within a year nobody started a project without it. Skills are now positioned to do the same thing on the agent side. The dbt Snowflake Native App now ships inside Snowsight; the lines keep blurring.
The 30-line counterargument
I would feel naive writing all of this without naming the elephant in the room. Two days before that post, r/LLMDevs had a thread called “agentic harness in 30 lines of code.” It is exactly what it sounds like: a tiny JavaScript loop that calls a chat completions endpoint, dispatches on a handful of tools, appends results to the history, and repeats. 68 upvotes, 31 comments, mostly in agreement.
agentic harness in 30 lines of code
by u/Everlier in LLMDevs
The top comment asks the obvious question: why even keep read and write? Bash already does both. “Bash is all you need.” This is the right kind of skepticism to hold next to a marketplace pitch. The agentic loop is indeed small. But the reason Cortex Code is interesting is the defaults, not the loop. It is the connection profile that already knows where my warehouses live, the bundled skills that already know how Snowflake billing works, the governance hooks that already know which roles can call which functions, and the fact that a custom skill is a markdown file rather than a 400‑line LangGraph DAG.
In other words: thirty lines of bash is correct and also not the product. The product is the inventory of opinionated defaults that ship with it, plus the social fabric (skill marketplace, plugin manifest, GitHub-installable bundles) that lets the community add more.
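For the record, the loop really is that small. The thread's version was JavaScript; here is the same idea in Python, with the model call stubbed out as a plain function so the shape is visible (the `model` interface below is my assumption for illustration, not any particular vendor's API):

```python
import subprocess

def run_bash(cmd: str) -> str:
    """The thread's single tool: hand a command to bash, return its output."""
    proc = subprocess.run(["bash", "-c", cmd], capture_output=True, text=True)
    return proc.stdout + proc.stderr

def agent_loop(model, task: str, max_turns: int = 10) -> list:
    """Minimal agentic harness: call the model, dispatch the tool it asks
    for, append the result to the history, repeat until it says it's done.

    `model` stands in for a chat-completions call; here it takes the
    message history and returns either {"tool": "bash", "input": ...}
    or {"done": ...}. That interface is assumed for this sketch.
    """
    history = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = model(history)
        if "done" in reply:
            history.append({"role": "assistant", "content": reply["done"]})
            break
        if reply.get("tool") == "bash":
            history.append({"role": "tool", "content": run_bash(reply["input"])})
    return history
```

That is the whole loop. Everything a product like Cortex Code adds lives outside it: which tools exist, what the system prompt knows, and what guardrails run around each dispatch.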
dbt packages, but for the agent layer
The dbt package analogy is, IMHO, the most useful frame I have found for this. Packages worked because:
- The base tool (dbt) had narrow, opinionated defaults.
- The package format (a sub-folder of macros and models, declared in `packages.yml`) was almost trivially simple.
- Distribution was Git-based, so anyone could publish without a vendor's permission.
- The first few essential packages (dbt-utils, dbt-codegen, audit-helper) were so obviously useful that not installing them looked weird within a year.
Cortex Code skills tick all four boxes. SKILL.md is even simpler than a dbt macro. Plugins install from Git. And the obvious must-have skills are visibly being built right now in public: semantic-model generators, governance auditors, cost-attribution skills.

But there is, of course, a catch. dbt packages are free, run inside your warehouse, and produce SQL artifacts that survive whatever tool you used to generate them. Skills are markdown instructions that produce prompts, and prompts produce non‑deterministic model output. A “bad” dbt package fails loudly. A “bad” skill quietly hallucinates a metric definition. The QA story for community skills is the open problem of the next 90 days. This is exactly where the “shadow AI validation team” that r/analytics keeps complaining about will end up earning its keep.
What I am watching for next
Three things, in order of how soon I expect to see them:
- An informal “community skills” registry, probably on GitHub before it lives in the official marketplace, with the usual ranking‑by‑stars dynamics.
- A first wave of skill QA tooling (golden‑output tests, prompt regressions, semantic diffing of `semantic_model.yaml`). This layer does not exist yet; it is the same gap dbt‑tests filled in 2020.
- Pricing creep. Skill authors will eventually want to charge, and the marketplace will need a billing story. I would not bet on this in the next 90 days, but I would bet on it before Summit 2027.
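To make “semantic diffing” concrete: the check such tooling would run is not a text diff but a comparison of parsed models, ignoring cosmetic differences and flagging real drift. A toy version, operating on plain dicts (a real tool would parse the YAML first; none of this is an existing tool):

```python
def semantic_diff(golden: dict, generated: dict) -> list:
    """Report meaningful drift between two parsed semantic models:
    missing/extra measures and changed expressions, while ignoring
    cosmetic differences like key order or whitespace in SQL."""
    problems = []
    g = {m["name"]: m for m in golden.get("measures", [])}
    n = {m["name"]: m for m in generated.get("measures", [])}
    for name in sorted(g.keys() - n.keys()):
        problems.append(f"missing measure: {name}")
    for name in sorted(n.keys() - g.keys()):
        problems.append(f"unexpected measure: {name}")
    for name in sorted(g.keys() & n.keys()):
        # Strip all whitespace so "SUM( amount )" == "SUM(amount)".
        if "".join(g[name]["expr"].split()) != "".join(n[name]["expr"].split()):
            problems.append(f"changed definition: {name}")
    return problems
```

The “quietly hallucinated metric definition” failure mode from earlier is exactly the third branch: same measure name, silently different expression.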
From my chair, the cleanest way to track this is to just keep watching r/snowflake. The marketplace will go through the same growing pains the dbt hub did, and most of those growing pains will be aired in public.
And that’s my 2 cents on what happened over the past weekend: agentic loops are thirty lines, skills are markdown files, and the marketplace is ninety days old. The next interesting move is whoever publishes the first dbt‑utils‑equivalent Cortex Code plugin and gets ten thousand installs out of it. 🤓
If you are already hacking on Cortex Code skills, I would love to hear what you are automating and how you are thinking about testing them. 😎
