content/cookbooks/ai-chat-app/intro.md (2 changes: 1 addition & 1 deletion)
@@ -4,7 +4,7 @@ A streaming AI chat app on Databricks: a user sends a message, the server authen

### How the steps fit together

-Work through the steps in the order below. Each one adds one concrete piece; by the end you have a deployable app.
+Work through the steps in the order below. Each one adds one concrete piece; by the end you have a deployable app. Your installed Databricks agent skills provide the implementation patterns for each step.

1. **Spin Up a Databricks App** — scaffold a fresh AppKit Databricks App with `databricks apps init` (the meta-prompt above already verifies the CLI profile via [Set Up Your Local Dev Environment](/templates/set-up-your-local-dev-environment)).
2. **Query AI Gateway Endpoints** — pick a chat model (e.g. `databricks-gpt-5-4-mini`) and wire up `createOpenAI()` with the AI Gateway base URL.
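
For context on step 2 above, a minimal sketch of what the `createOpenAI()` wiring might look like. This snippet is not part of the diff; the `@ai-sdk/openai` import, the environment variable names, and the `/serving-endpoints` base-URL shape are assumptions to confirm against your workspace.

```ts
// Hypothetical sketch of cookbook step 2 (not part of this PR).
import { createOpenAI } from "@ai-sdk/openai";

const gateway = createOpenAI({
  // Assumed base-URL shape for the AI Gateway; confirm in your workspace.
  baseURL: `${process.env.DATABRICKS_HOST}/serving-endpoints`,
  apiKey: process.env.DATABRICKS_TOKEN,
});

// Chat model name taken from the cookbook step above.
const chatModel = gateway("databricks-gpt-5-4-mini");
```
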
content/dev-guidelines.md (4 changes: 4 additions & 0 deletions)
@@ -8,6 +8,10 @@ Follow these rules every time you act on a DevHub prompt.
- Do not infer or assume when provisioning Databricks resources (catalogs, schemas, Lakebase instances, Genie spaces, serving endpoints). Ask the user whether to create new resources or reuse existing ones.
- If you run into trouble, fetch additional templates and docs from https://dev.databricks.com (the index lives at https://dev.databricks.com/llms.txt). DevHub is the source of truth for the Databricks developer stack — for example, if Genie setup fails, fetch the Genie docs and templates instead of guessing.

+## Implementation guidance
+
+The template below describes **what to build**, not how to build it. Your installed Databricks agent skills contain all implementation patterns, code examples, CLI commands, and configuration needed. Use them to implement the template. If skills are not installed yet, run `databricks experimental aitools install --project` first (covered in the local-bootstrap step below).
+
## Engage the user in a conversation

Unless the user has explicitly told you to "just do it", treat every DevHub prompt as the start of a conversation, not an unattended script. The user knows their domain best; DevHub knows the Databricks stack. Both are required to build a successful system.
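
As a usage sketch of the new guidance, the skills install step it references (the command is quoted verbatim from the added paragraph; no additional flags are assumed):

```bash
# One-time, per project: install the Databricks agent skills that the
# implementation guidance above relies on.
databricks experimental aitools install --project
```
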
content/examples/inventory-intelligence/content.md (32 changes: 0 additions & 32 deletions)
@@ -2,38 +2,6 @@

This template builds a full retail inventory management system on the Databricks stack: a React app where store managers monitor stock health, review AI-generated replenishment recommendations, and approve purchase orders — all powered by a live medallion pipeline and pluggable demand forecast job.

-### Setup — interview the user
-
-Before doing anything else, ask the user these questions **one at a time**. Wait for each answer before asking the next. Use the answers to configure `databricks.yml`, the seed scripts, and the deploy commands.
-
-1. **Databricks workspace URL** — ask: "What is your Databricks workspace URL? (e.g. `https://dbc-xxxx.cloud.databricks.com` — run `databricks auth env` to find it)"
-2. **CLI profile** — ask: "Which Databricks CLI profile should I use? (run `databricks auth profiles` to list them; press Enter to use `DEFAULT`)"
-3. **Unity Catalog catalog name** — ask: "What is your Unity Catalog catalog name? The pipeline will write silver and gold Delta tables there (e.g. `my_catalog`)"
-4. **SQL Warehouse ID** — ask: "What is your SQL Warehouse ID? (run `databricks warehouses list --output json` or find it in the warehouse settings URL — if you don't have one, I can create a serverless warehouse for you)"
-5. **Lakebase** — ask: "Do you already have a Lakebase project and database set up? If yes, share the branch resource name (e.g. `projects/my-project/branches/production`) and database resource name. If no, I'll walk you through creating one."
-6. **Data mode** — ask: "Do you want demo data (5 stores, controlled stock scenarios, great for demos) or realistic randomized data seeded from scratch?"
-7. **Genie analytics tab** — ask: "Do you want the optional AI/BI Genie chat tab in the app? (If yes, the Genie space will be created automatically — this requires running the sample data pipeline first: data generator → DLT analytics → forecast job, ~10–15 min. This happens as part of the deploy.)"
-8. **Demand forecast model** — ask: "Which demand forecast model would you like? Options: `weighted_moving_average` (default, no extra infra), `exponential_smoothing`, `prophet`, or `model_serving` (requires a Model Serving endpoint)"
-
-Once all answers are collected:
-
-1. Update `databricks.yml` — set `workspace.host`, `sql_warehouse_id`, `postgres_branch`, `postgres_database`, `catalog`, `forecast_model` in the appropriate target(s).
-2. Run the deploy:
-   - **Randomized data** (with or without Genie): `./deploy.sh --profile <profile> --target full --sample-data`
-   - **Demo data without Genie**: `./deploy.sh --profile <profile> --target demo`
-   - **Demo data with Genie**: run `--target full --sample-data` first (creates the DLT pipeline and UC gold tables Genie needs), then `./deploy.sh --profile <profile> --target demo` to load controlled demo data and wire up the Genie space
-3. `deploy.sh` handles Genie automatically: it checks whether UC gold tables exist, runs the sample data pipeline if not, creates the Genie space, patches `databricks.yml` with the new space ID (in the correct target section), and redeploys with the Genie resource bound.
-
-**If the user needs a new SQL Warehouse**, create a serverless one:
-
-```bash
-databricks warehouses create --profile <profile> --name "inventory-intelligence" \
-  --cluster-size Small --auto-stop-mins 30 --max-num-clusters 1 \
-  --enable-serverless-compute
-```
-
-Use the `id` from the response as the warehouse ID.

### Data Flow

Sales and stock data flow from Lakebase Postgres through the lakehouse, get enriched by a demand forecast model, and are served back to the app through reverse sync:
content/intent-cookbook.md (1 change: 1 addition & 0 deletions)
@@ -9,6 +9,7 @@ Your job in this conversation is to:
1. Clarify the user's **goal for this archetype** — production app, learning project, or demo.
2. Verify the local Databricks dev environment is ready (block below).
3. Walk the user through the cookbook section by section, asking the questions each section surfaces, and stitching the included recipes together coherently.
+4. When the cookbook content and your installed Databricks agent skills cover the same topic, **treat the skills as the source of truth** for implementation patterns, CLI commands, and code. The cookbook provides context and scope; the skills provide the authoritative how-to.

## Step 1 — Clarify intent before touching code

content/intent-example.md (1 change: 1 addition & 0 deletions)
@@ -9,6 +9,7 @@ Your job in this conversation is to:
1. Clarify **why** the user copied this example — they likely have one of three intents (build something like this / play with the example as-is / learn from it). Adapt to whichever it is.
2. Verify the local Databricks dev environment is ready (block below).
3. Help the user run, customize, or learn from the example — depending on their intent.
+4. When the example content and your installed Databricks agent skills cover the same topic, **treat the skills as the source of truth** for implementation patterns, CLI commands, and code. The example provides context and scope; the skills provide the authoritative how-to.

## Step 1 — Clarify intent before touching code

content/intent-recipe.md (1 change: 1 addition & 0 deletions)
@@ -9,6 +9,7 @@ Your job in this conversation is to:
1. Clarify whether the user is **integrating this recipe into an existing project** or **starting fresh from scratch**, and adapt accordingly.
2. Verify the local Databricks dev environment is ready (block below).
3. Walk the user through the recipe step by step, asking the questions the recipe itself surfaces.
+4. When the recipe content and your installed Databricks agent skills cover the same topic, **treat the skills as the source of truth** for implementation patterns, CLI commands, and code. The recipe provides context and scope; the skills provide the authoritative how-to.

## Step 1 — Clarify intent before touching code

src/lib/examples/build-example-markdown.ts (28 changes: 2 additions & 26 deletions)
@@ -93,37 +93,13 @@ export function buildFullPrompt(
if (isInitCommand(example.initCommand)) {
const hasPrereqs = Boolean(sections.prerequisites);
const hasDeployBlock = Boolean(sections.deployment);
-const initStepNumber = hasPrereqs ? 3 : 2;

-lines.push(
-"### 1. Verify Databricks CLI auth",
-"",
-"The init flow calls the workspace API to resolve connection details, so it fails immediately without a valid Databricks CLI profile. Before running init, check auth:",
-"",
-"```bash",
-"databricks auth profiles",
-"```",
-"",
-"If no profile shows `Valid: YES`, authenticate one first:",
-"",
-"```bash",
-"databricks auth login --profile <name> --host <workspace-url>",
-"```",
-"",
-"If `DEFAULT` is not the profile you want to use, export the one you want so subsequent commands pick it up:",
-"",
-"```bash",
-"export DATABRICKS_CONFIG_PROFILE=<profile>",
-"```",
-"",
-);

if (hasPrereqs) {
lines.push(sections.prerequisites!, "");
}

lines.push(
-`### ${initStepNumber}. Scaffold the project with \`databricks apps init\``,
+"### Scaffold the project",
"",
"Run the command below to scaffold this example into a new directory using the [AppKit template system](/docs/appkit/v0/development/templates). It creates the app in your workspace, binds required resources, and writes a local `.env` with connection details resolved by the AppKit plugins.",
"",
Expand All @@ -143,7 +119,7 @@ export function buildFullPrompt(
}
} else {
lines.push(
"### 1. Clone locally and follow `template/README.md`",
"### Clone and follow `template/README.md`",
"",
"Run the command below to clone the DevHub repository locally and enter this example's **`template/`** directory.",
"",
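
A minimal usage sketch of the builder after this change; the `initOpts` and `emptySections` fixtures are borrowed from the test file below, and the snippet is illustrative rather than part of the diff:

```ts
// After this change the prompt opens with an unnumbered scaffold heading
// and no auth-verification step (mirrors the updated test assertions).
const prompt = buildFullPrompt({ ...initOpts, sections: emptySections });
console.assert(prompt.includes("### Scaffold the project"));
console.assert(!prompt.includes("### 1. Verify Databricks CLI auth"));
```
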
tests/build-example-markdown.test.ts (60 changes: 18 additions & 42 deletions)
@@ -93,18 +93,12 @@ describe("buildFullPrompt", () => {
test("includes get started steps", () => {
const prompt = buildFullPrompt({ ...baseOpts, sections: emptySections });
expect(prompt).toContain("## Get started");
-expect(prompt).toContain(
-"### 1. Clone locally and follow `template/README.md`",
-);
+expect(prompt).toContain("### Clone and follow `template/README.md`");
expect(prompt).toContain(minimalExample.initCommand);
expect(prompt).toContain("template/README.md");
expect(prompt).toContain(
"databricks apps init --template https://github.com/databricks/devhub/tree/main/examples/test-example",
);
-expect(prompt).not.toContain(
-"### 2. Provision or link existing Databricks resources",
-);
-expect(prompt).not.toContain("### 3. Deploy the application");
});

test("includes content section body", () => {
@@ -122,7 +116,7 @@
...baseOpts,
sections: contentOnlySections,
});
-const getStartedIdx = prompt.indexOf("### 1. Clone locally");
+const getStartedIdx = prompt.indexOf("### Clone and follow");
const rawIdx = prompt.indexOf("This is the example overview");
const sourceIdx = prompt.indexOf("## Source Code");
expect(getStartedIdx).toBeLessThan(rawIdx);
@@ -257,14 +251,12 @@ describe("example Get started: full prompt (Copy prompt) vs export markdown (Cop
const full = buildFullPrompt({ ...baseOpts, sections: emptySections });
const exportMd = buildAdditionalMarkdown(baseOpts);

-expect(full).toContain(
-"### 1. Clone locally and follow `template/README.md`",
-);
+expect(full).toContain("### Clone and follow `template/README.md`");
expect(full).toContain("```bash");
expect(full).toContain(minimalExample.initCommand);

expect(exportMd).toContain(buildExportGetStartedSection(minimalExample));
-expect(exportMd).not.toContain("### 1. Clone locally");
+expect(exportMd).not.toContain("### Clone and follow");
expect(exportMd).toContain("```bash");
expect(exportMd).toContain(minimalExample.initCommand);
});
@@ -312,30 +304,21 @@ describe("init-style examples (databricks apps init)", () => {
expect(authIdx).toBeLessThan(initIdx);
});

test("full prompt has auth verification as step 1 before scaffold step 2", () => {
test("full prompt has scaffold section without duplicate auth verification", () => {
const prompt = buildFullPrompt({ ...initOpts, sections: emptySections });
expect(prompt).toContain("### 1. Verify Databricks CLI auth");
expect(prompt).toContain(
"### 2. Scaffold the project with `databricks apps init`",
);
expect(prompt).toContain("databricks auth profiles");
expect(prompt).toContain("databricks auth login --profile");
expect(prompt).not.toContain(
"### 1. Clone locally and follow `template/README.md`",
);
expect(prompt).toContain("### Scaffold the project");
expect(prompt).not.toContain("Verify Databricks CLI auth");
expect(prompt).not.toContain("databricks auth profiles");
expect(prompt).not.toContain("databricks auth login --profile");
expect(prompt).not.toContain("### Clone and follow `template/README.md`");
expect(prompt).toContain(initExample.initCommand);
-const authIdx = prompt.indexOf("### 1. Verify Databricks CLI auth");
-const scaffoldIdx = prompt.indexOf(
-"### 2. Scaffold the project with `databricks apps init`",
-);
-expect(authIdx).toBeLessThan(scaffoldIdx);
});

test("prereq section is injected as step 2 and scaffold becomes step 3", () => {
test("prereq section is injected before scaffold", () => {
const sections: ExampleSections = {
content: "",
prerequisites: [
"### 2. Create the Lakebase Postgres prerequisites",
"### Create the Lakebase Postgres prerequisites",
"",
"```bash",
"databricks postgres create-project my-proj",
@@ -346,28 +329,21 @@
...initOpts,
sections,
});
expect(prompt).toContain("### 1. Verify Databricks CLI auth");
expect(prompt).toContain(
"### 2. Create the Lakebase Postgres prerequisites",
);
expect(prompt).toContain(
"### 3. Scaffold the project with `databricks apps init`",
);
expect(prompt).toContain("### Create the Lakebase Postgres prerequisites");
expect(prompt).toContain("### Scaffold the project");
expect(prompt).toContain("databricks postgres create-project my-proj");
const prereqIdx = prompt.indexOf(
"### 2. Create the Lakebase Postgres prerequisites",
);
const scaffoldIdx = prompt.indexOf(
"### 3. Scaffold the project with `databricks apps init`",
"### Create the Lakebase Postgres prerequisites",
);
const scaffoldIdx = prompt.indexOf("### Scaffold the project");
expect(prereqIdx).toBeLessThan(scaffoldIdx);
});

test("deploy section replaces the default README pointer when provided", () => {
const sections: ExampleSections = {
content: "",
deployment: [
"### 3. Install and deploy",
"### Install and deploy",
"",
"```bash",
"npm install",
@@ -379,7 +355,7 @@
...initOpts,
sections,
});
expect(prompt).toContain("### 3. Install and deploy");
expect(prompt).toContain("### Install and deploy");
expect(prompt).toContain("npm run deploy");
expect(prompt).not.toContain(
"A **`README.md`** ships inside the scaffolded project",