diff --git a/content/cookbooks/ai-chat-app/intro.md b/content/cookbooks/ai-chat-app/intro.md
index c50634c..2c02796 100644
--- a/content/cookbooks/ai-chat-app/intro.md
+++ b/content/cookbooks/ai-chat-app/intro.md
@@ -4,7 +4,7 @@ A streaming AI chat app on Databricks: a user sends a message, the server authen
 
 ### How the steps fit together
 
-Work through the steps in the order below. Each one adds one concrete piece; by the end you have a deployable app.
+Work through the steps in the order below. Each one adds one concrete piece; by the end you have a deployable app. Your installed Databricks agent skills provide the implementation patterns for each step.
 
 1. **Spin Up a Databricks App** — scaffold a fresh AppKit Databricks App with `databricks apps init` (the meta-prompt above already verifies the CLI profile via [Set Up Your Local Dev Environment](/templates/set-up-your-local-dev-environment)).
 2. **Query AI Gateway Endpoints** — pick a chat model (e.g. `databricks-gpt-5-4-mini`) and wire up `createOpenAI()` with the AI Gateway base URL.
diff --git a/content/dev-guidelines.md b/content/dev-guidelines.md
index 999be2c..3a0a75d 100644
--- a/content/dev-guidelines.md
+++ b/content/dev-guidelines.md
@@ -8,6 +8,10 @@ Follow these rules every time you act on a DevHub prompt.
 - Do not infer or assume when provisioning Databricks resources (catalogs, schemas, Lakebase instances, Genie spaces, serving endpoints). Ask the user whether to create new resources or reuse existing ones.
 - If you run into trouble, fetch additional templates and docs from https://dev.databricks.com (the index lives at https://dev.databricks.com/llms.txt). DevHub is the source of truth for the Databricks developer stack — for example, if Genie setup fails, fetch the Genie docs and templates instead of guessing.
 
+## Implementation guidance
+
+The template below describes **what to build**, not how to build it. Your installed Databricks agent skills contain all implementation patterns, code examples, CLI commands, and configuration needed. Use them to implement the template. If skills are not installed yet, run `databricks experimental aitools install --project` first (covered in the local-bootstrap step below).
+
 ## Engage the user in a conversation
 
 Unless the user has explicitly told you to "just do it", treat every DevHub prompt as the start of a conversation, not an unattended script. The user knows their domain best; DevHub knows the Databricks stack. Both are required to build a successful system.
diff --git a/content/examples/inventory-intelligence/content.md b/content/examples/inventory-intelligence/content.md
index 0109062..e95119f 100644
--- a/content/examples/inventory-intelligence/content.md
+++ b/content/examples/inventory-intelligence/content.md
@@ -2,38 +2,6 @@
 This template builds a full retail inventory management system on the Databricks stack: a React app where store managers monitor stock health, review AI-generated replenishment recommendations, and approve purchase orders — all powered by a live medallion pipeline and pluggable demand forecast job.
 
-### Setup — interview the user
-
-Before doing anything else, ask the user these questions **one at a time**. Wait for each answer before asking the next. Use the answers to configure `databricks.yml`, the seed scripts, and the deploy commands.
-
-1. **Databricks workspace URL** — ask: "What is your Databricks workspace URL? (e.g. `https://dbc-xxxx.cloud.databricks.com` — run `databricks auth env` to find it)"
-2. **CLI profile** — ask: "Which Databricks CLI profile should I use? (run `databricks auth profiles` to list them; press Enter to use `DEFAULT`)"
-3. **Unity Catalog catalog name** — ask: "What is your Unity Catalog catalog name? The pipeline will write silver and gold Delta tables there (e.g. `my_catalog`)"
-4. **SQL Warehouse ID** — ask: "What is your SQL Warehouse ID? (run `databricks warehouses list --output json` or find it in the warehouse settings URL — if you don't have one, I can create a serverless warehouse for you)"
-5. **Lakebase** — ask: "Do you already have a Lakebase project and database set up? If yes, share the branch resource name (e.g. `projects/my-project/branches/production`) and database resource name. If no, I'll walk you through creating one."
-6. **Data mode** — ask: "Do you want demo data (5 stores, controlled stock scenarios, great for demos) or realistic randomized data seeded from scratch?"
-7. **Genie analytics tab** — ask: "Do you want the optional AI/BI Genie chat tab in the app? (If yes, the Genie space will be created automatically — this requires running the sample data pipeline first: data generator → DLT analytics → forecast job, ~10–15 min. This happens as part of the deploy.)"
-8. **Demand forecast model** — ask: "Which demand forecast model would you like? Options: `weighted_moving_average` (default, no extra infra), `exponential_smoothing`, `prophet`, or `model_serving` (requires a Model Serving endpoint)"
-
-Once all answers are collected:
-
-1. Update `databricks.yml` — set `workspace.host`, `sql_warehouse_id`, `postgres_branch`, `postgres_database`, `catalog`, `forecast_model` in the appropriate target(s).
-2. Run the deploy:
-   - **Randomized data** (with or without Genie): `./deploy.sh --profile --target full --sample-data`
-   - **Demo data without Genie**: `./deploy.sh --profile --target demo`
-   - **Demo data with Genie**: run `--target full --sample-data` first (creates the DLT pipeline and UC gold tables Genie needs), then `./deploy.sh --profile --target demo` to load controlled demo data and wire up the Genie space
-3. `deploy.sh` handles Genie automatically: it checks whether UC gold tables exist, runs the sample data pipeline if not, creates the Genie space, patches `databricks.yml` with the new space ID (in the correct target section), and redeploys with the Genie resource bound.
-
-**If the user needs a new SQL Warehouse**, create a serverless one:
-
-```bash
-databricks warehouses create --profile --name "inventory-intelligence" \
-  --cluster-size Small --auto-stop-mins 30 --max-num-clusters 1 \
-  --enable-serverless-compute
-```
-
-Use the `id` from the response as the warehouse ID.
-
 ### Data Flow
 
 Sales and stock data flow from Lakebase Postgres through the lakehouse, get enriched by a demand forecast model, and are served back to the app through reverse sync:
 
diff --git a/content/intent-cookbook.md b/content/intent-cookbook.md
index b3804c8..8225735 100644
--- a/content/intent-cookbook.md
+++ b/content/intent-cookbook.md
@@ -9,6 +9,7 @@ Your job in this conversation is to:
 1. Clarify the user's **goal for this archetype** — production app, learning project, or demo.
 2. Verify the local Databricks dev environment is ready (block below).
 3. Walk the user through the cookbook section by section, asking the questions each section surfaces, and stitching the included recipes together coherently.
+4. When the cookbook content and your installed Databricks agent skills cover the same topic, **treat the skills as the source of truth** for implementation patterns, CLI commands, and code. The cookbook provides context and scope; the skills provide the authoritative how-to.
 
 ## Step 1 — Clarify intent before touching code
 
diff --git a/content/intent-example.md b/content/intent-example.md
index f7212d8..441e2f4 100644
--- a/content/intent-example.md
+++ b/content/intent-example.md
@@ -9,6 +9,7 @@ Your job in this conversation is to:
 1. Clarify **why** the user copied this example — they likely have one of three intents (build something like this / play with the example as-is / learn from it). Adapt to whichever it is.
 2. Verify the local Databricks dev environment is ready (block below).
 3. Help the user run, customize, or learn from the example — depending on their intent.
+4. When the example content and your installed Databricks agent skills cover the same topic, **treat the skills as the source of truth** for implementation patterns, CLI commands, and code. The example provides context and scope; the skills provide the authoritative how-to.
 
 ## Step 1 — Clarify intent before touching code
 
diff --git a/content/intent-recipe.md b/content/intent-recipe.md
index b34cc1d..145cadf 100644
--- a/content/intent-recipe.md
+++ b/content/intent-recipe.md
@@ -9,6 +9,7 @@ Your job in this conversation is to:
 1. Clarify whether the user is **integrating this recipe into an existing project** or **starting fresh from scratch**, and adapt accordingly.
 2. Verify the local Databricks dev environment is ready (block below).
 3. Walk the user through the recipe step by step, asking the questions the recipe itself surfaces.
+4. When the recipe content and your installed Databricks agent skills cover the same topic, **treat the skills as the source of truth** for implementation patterns, CLI commands, and code. The recipe provides context and scope; the skills provide the authoritative how-to.
 
 ## Step 1 — Clarify intent before touching code
 
diff --git a/src/lib/examples/build-example-markdown.ts b/src/lib/examples/build-example-markdown.ts
index bbe24db..8ddbbd6 100644
--- a/src/lib/examples/build-example-markdown.ts
+++ b/src/lib/examples/build-example-markdown.ts
@@ -93,37 +93,13 @@ export function buildFullPrompt(
   if (isInitCommand(example.initCommand)) {
     const hasPrereqs = Boolean(sections.prerequisites);
     const hasDeployBlock = Boolean(sections.deployment);
-    const initStepNumber = hasPrereqs ? 3 : 2;
-
-    lines.push(
-      "### 1. Verify Databricks CLI auth",
-      "",
-      "The init flow calls the workspace API to resolve connection details, so it fails immediately without a valid Databricks CLI profile. Before running init, check auth:",
-      "",
-      "```bash",
-      "databricks auth profiles",
-      "```",
-      "",
-      "If no profile shows `Valid: YES`, authenticate one first:",
-      "",
-      "```bash",
-      "databricks auth login --profile --host ",
-      "```",
-      "",
-      "If `DEFAULT` is not the profile you want to use, export the one you want so subsequent commands pick it up:",
-      "",
-      "```bash",
-      "export DATABRICKS_CONFIG_PROFILE=",
-      "```",
-      "",
-    );
 
     if (hasPrereqs) {
       lines.push(sections.prerequisites!, "");
     }
 
     lines.push(
-      `### ${initStepNumber}. Scaffold the project with \`databricks apps init\``,
+      "### Scaffold the project",
       "",
       "Run the command below to scaffold this example into a new directory using the [AppKit template system](/docs/appkit/v0/development/templates). It creates the app in your workspace, binds required resources, and writes a local `.env` with connection details resolved by the AppKit plugins.",
       "",
@@ -143,7 +119,7 @@
     }
   } else {
     lines.push(
-      "### 1. Clone locally and follow `template/README.md`",
+      "### Clone and follow `template/README.md`",
       "",
       "Run the command below to clone the DevHub repository locally and enter this example's **`template/`** directory.",
       "",
diff --git a/tests/build-example-markdown.test.ts b/tests/build-example-markdown.test.ts
index 44f5065..94e518e 100644
--- a/tests/build-example-markdown.test.ts
+++ b/tests/build-example-markdown.test.ts
@@ -93,18 +93,12 @@ describe("buildFullPrompt", () => {
   test("includes get started steps", () => {
     const prompt = buildFullPrompt({ ...baseOpts, sections: emptySections });
     expect(prompt).toContain("## Get started");
-    expect(prompt).toContain(
-      "### 1. Clone locally and follow `template/README.md`",
-    );
+    expect(prompt).toContain("### Clone and follow `template/README.md`");
     expect(prompt).toContain(minimalExample.initCommand);
     expect(prompt).toContain("template/README.md");
     expect(prompt).toContain(
       "databricks apps init --template https://github.com/databricks/devhub/tree/main/examples/test-example",
     );
-    expect(prompt).not.toContain(
-      "### 2. Provision or link existing Databricks resources",
-    );
-    expect(prompt).not.toContain("### 3. Deploy the application");
   });
 
   test("includes content section body", () => {
@@ -122,7 +116,7 @@
       ...baseOpts,
       sections: contentOnlySections,
     });
-    const getStartedIdx = prompt.indexOf("### 1. Clone locally");
+    const getStartedIdx = prompt.indexOf("### Clone and follow");
     const rawIdx = prompt.indexOf("This is the example overview");
     const sourceIdx = prompt.indexOf("## Source Code");
     expect(getStartedIdx).toBeLessThan(rawIdx);
@@ -257,14 +251,12 @@ describe("example Get started: full prompt (Copy prompt) vs export markdown (Cop
     const full = buildFullPrompt({ ...baseOpts, sections: emptySections });
     const exportMd = buildAdditionalMarkdown(baseOpts);
 
-    expect(full).toContain(
-      "### 1. Clone locally and follow `template/README.md`",
-    );
+    expect(full).toContain("### Clone and follow `template/README.md`");
     expect(full).toContain("```bash");
     expect(full).toContain(minimalExample.initCommand);
 
     expect(exportMd).toContain(buildExportGetStartedSection(minimalExample));
-    expect(exportMd).not.toContain("### 1. Clone locally");
+    expect(exportMd).not.toContain("### Clone and follow");
     expect(exportMd).toContain("```bash");
     expect(exportMd).toContain(minimalExample.initCommand);
   });
@@ -312,30 +304,21 @@ describe("init-style examples (databricks apps init)", () => {
     expect(authIdx).toBeLessThan(initIdx);
   });
 
-  test("full prompt has auth verification as step 1 before scaffold step 2", () => {
+  test("full prompt has scaffold section without duplicate auth verification", () => {
     const prompt = buildFullPrompt({ ...initOpts, sections: emptySections });
-    expect(prompt).toContain("### 1. Verify Databricks CLI auth");
-    expect(prompt).toContain(
-      "### 2. Scaffold the project with `databricks apps init`",
-    );
-    expect(prompt).toContain("databricks auth profiles");
-    expect(prompt).toContain("databricks auth login --profile");
-    expect(prompt).not.toContain(
-      "### 1. Clone locally and follow `template/README.md`",
-    );
+    expect(prompt).toContain("### Scaffold the project");
+    expect(prompt).not.toContain("Verify Databricks CLI auth");
+    expect(prompt).not.toContain("databricks auth profiles");
+    expect(prompt).not.toContain("databricks auth login --profile");
+    expect(prompt).not.toContain("### Clone and follow `template/README.md`");
     expect(prompt).toContain(initExample.initCommand);
-    const authIdx = prompt.indexOf("### 1. Verify Databricks CLI auth");
-    const scaffoldIdx = prompt.indexOf(
-      "### 2. Scaffold the project with `databricks apps init`",
-    );
-    expect(authIdx).toBeLessThan(scaffoldIdx);
   });
 
-  test("prereq section is injected as step 2 and scaffold becomes step 3", () => {
+  test("prereq section is injected before scaffold", () => {
     const sections: ExampleSections = {
       content: "",
       prerequisites: [
-        "### 2. Create the Lakebase Postgres prerequisites",
+        "### Create the Lakebase Postgres prerequisites",
         "",
         "```bash",
         "databricks postgres create-project my-proj",
@@ -346,20 +329,13 @@
       ...initOpts,
       sections,
     });
-    expect(prompt).toContain("### 1. Verify Databricks CLI auth");
-    expect(prompt).toContain(
-      "### 2. Create the Lakebase Postgres prerequisites",
-    );
-    expect(prompt).toContain(
-      "### 3. Scaffold the project with `databricks apps init`",
-    );
+    expect(prompt).toContain("### Create the Lakebase Postgres prerequisites");
+    expect(prompt).toContain("### Scaffold the project");
    expect(prompt).toContain("databricks postgres create-project my-proj");
     const prereqIdx = prompt.indexOf(
-      "### 2. Create the Lakebase Postgres prerequisites",
-    );
-    const scaffoldIdx = prompt.indexOf(
-      "### 3. Scaffold the project with `databricks apps init`",
+      "### Create the Lakebase Postgres prerequisites",
     );
+    const scaffoldIdx = prompt.indexOf("### Scaffold the project");
     expect(prereqIdx).toBeLessThan(scaffoldIdx);
   });
 
@@ -367,7 +343,7 @@
     const sections: ExampleSections = {
       content: "",
       deployment: [
-        "### 3. Install and deploy",
+        "### Install and deploy",
         "",
         "```bash",
         "npm install",
@@ -379,7 +355,7 @@
       ...initOpts,
       sections,
     });
-    expect(prompt).toContain("### 3. Install and deploy");
+    expect(prompt).toContain("### Install and deploy");
     expect(prompt).toContain("npm run deploy");
     expect(prompt).not.toContain(
       "A **`README.md`** ships inside the scaffolded project",