
Commit ba6631a

arora-saurabh448, ppradnesh, and claude authored
docs: add showcase examples with screenshots for examples page (#292)
Replace the generic bulleted grid cards with six detailed showcase examples, each with a real prompt and screenshot: NYC Taxi, Olist E-Commerce, Global CO2 Explorer, Spotify Analytics Migration, US Home Sales Data Science, and Snowflake vs Databricks Benchmark.

Co-authored-by: Pradnesh <pradneshpatil@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1 parent b5242fa commit ba6631a

7 files changed

Lines changed: 46 additions & 20 deletions

File tree

- docs/docs/assets/images/ (6 new screenshots: 282 KB, 414 KB, 289 KB, 135 KB, 395 KB, 138 KB)
- docs/docs/examples/index.md

docs/docs/examples/index.md

Lines changed: 46 additions & 20 deletions
```diff
@@ -2,48 +2,74 @@
 
 Real-world examples showing what altimate can do across data engineering workflows. Each example demonstrates end-to-end automation — from discovery to implementation.
 
-<div class="grid cards" markdown>
+---
 
-- :material-pipe:{ .lg .middle } **Build, Test & Document dbt Models**
+## NYC Taxi Coverage Dashboard
 
-    ---
+`DuckDB` `dbt` `Airflow` `Python`
 
-    Pull context from your Knowledge Hub, grab requirements from a Jira ticket, and build fully tested dbt models — all from your IDE.
+**Prompt:**
 
+> Take the New York City taxi cab public dataset, bring up a DuckDB instance, and build a dashboard showing areas of maximum coverage and lowest coverage. Set up a complete dbt project with staging, intermediate, and mart layers, and create an Airflow DAG to orchestrate the pipeline.
 
-- :material-snowflake:{ .lg .middle } **Find Broken Views in Snowflake**
+![NYC Taxi Coverage Dashboard](../assets/images/nyc_taxi.png)
 
-    ---
+---
 
-    Create a "Sprint Work Agent" that queries Snowflake, finds empty views, traces root causes through dbt models, and files Jira tickets.
+## Olist E-Commerce Analytics Pipeline
 
+`Snowflake` `Azure Data Factory` `Azure Blob Storage` `dbt`
 
-- :material-cash-multiple:{ .lg .middle } **Optimize Cost & Performance**
+**Prompt:**
 
-    ---
+> Build an end-to-end e-commerce analytics pipeline using the Olist Brazilian E-Commerce dataset. Use Azure Data Factory to ingest CSV files from Blob Storage into Snowflake raw tables, then orchestrate Snowflake stored procedures to transform data through raw → staging → mart layers (a star schema with customer, product, and seller dimensions and an orders fact table). Create mart views for customer lifetime value, seller performance scores, and delivery SLA compliance.
 
-    Automate discovery and implementation of optimization opportunities across Snowflake, Databricks, and BigQuery.
+![ADF Snowflake Pipeline](../assets/images/ADF_Snowflake_Pipeline.png)
 
+---
 
-- :material-swap-horizontal:{ .lg .middle } **Migrate PySpark to dbt**
+## Global CO2 & Climate Explorer
 
-    ---
+`DuckDB-WASM` `SQL` `Browser`
 
-    Convert a PySpark-based reporting project in Databricks to dbt with automated code conversion, testing, and validation.
+**Prompt:**
 
+> Build me an interactive Global CO2 & Climate Explorer dashboard using DuckDB-WASM running entirely in the browser, sourcing data from Our World in Data's CO2 dataset. Give me surprising insights about who emits the most, how that's changing, the equity angle of per-capita emissions, and which countries bear the most historical responsibility. Include an interactive SQL console with example queries showing off CTEs and window functions (LAG, RANK, SUM OVER), and make it a single index.html with a dark theme.
 
-- :material-bug:{ .lg .middle } **Debug an Airflow DAG**
+![Global CO2 Explorer](../assets/images/global_co_explorer.png)
 
-    ---
+---
 
-    Use AI to debug Airflow DAGs by combining platform integrations, best-practice templates, and automated fix suggestions.
+## Spotify Analytics Pipeline Migration
 
+`PySpark` `dbt` `Databricks` `Airflow`
 
-- :material-function:{ .lg .middle } **Write Snowflake UDFs**
+**Prompt:**
 
-    ---
+> Modernize my Spotify analytics pipeline: use the Kaggle Spotify Tracks public dataset, migrate all PySpark transformations in /spotify-analytics/ to dbt on Databricks/Spark, preserve the ML feature engineering logic (popularity tiers, mood classification, audio profile scores), add schema tests and unit tests, generate an Airflow DAG with SLAs and alerting, and validate semantic equivalence of the outputs.
 
-    Use the Knowledge Hub to guide LLMs in building Snowflake UDFs with best practices, examples, and auto-generated documentation.
+![Spotify Analytics Pipeline](../assets/images/spotify_analytics.png)
 
+---
 
-</div>
+## US Home Sales Data Science Dashboard
+
+`Data Science` `K-Means` `OLS Regression` `R/ggplot2 Aesthetic`
+
+**Prompt:**
+
+> Download all available public US home sales data sets. Process and merge them into a unified format. Perform advanced data science on it to surface interesting insights: K-means, OLS regressions, and more. Build a single interactive dashboard with data-science-style charts (think violin plots, Q-Q plots, and lollipop charts). Use an R/ggplot2 aesthetic. No BI-style charts.
+
+![US Home Sales Dashboard](../assets/images/us_home_sales.png)
+
+---
+
+## Snowflake vs Databricks Deployment Benchmark
+
+`Snowflake` `Databricks` `Benchmarking` `Cost Analysis`
+
+**Prompt:**
+
+> The NovaMart e-commerce analytics platform in the current directory is ready for deployment. Deploy to both Snowflake and Databricks, testing multiple warehouse sizes on each platform (Snowflake: X-Small, Small, Medium; Databricks: 2X-Small, Small, Medium SQL Warehouses) to find the optimal price-performance configuration. Run the full data pipeline and benchmark queries (CLV calculation, daily incremental, executive dashboard) on each warehouse size, capturing execution time, credits/DBUs consumed, and bytes scanned. Generate a cost analysis document with a recommendation matrix showing cost-per-run for each platform/size combination, and recommend the single best platform + warehouse size for production based on cost efficiency and performance.
```
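The NYC Taxi prompt ends by asking for an Airflow DAG that orchestrates the dbt layers. A minimal sketch of what such a DAG could look like, assuming Airflow 2.4+, the dbt CLI on the worker, and a hypothetical project path (none of this is part of the commit):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT = "cd ~/nyc_taxi_dbt && dbt"  # assumed project location

with DAG(
    dag_id="nyc_taxi_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    staging = BashOperator(task_id="staging", bash_command=f"{DBT} run --select staging")
    intermediate = BashOperator(task_id="intermediate", bash_command=f"{DBT} run --select intermediate")
    marts = BashOperator(task_id="marts", bash_command=f"{DBT} run --select marts")
    tests = BashOperator(task_id="tests", bash_command=f"{DBT} test")

    # Layer order mirrors the staging → intermediate → mart structure in the prompt.
    staging >> intermediate >> marts >> tests
```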
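The Olist prompt culminates in mart views such as customer lifetime value. A sketch of creating one such view through the Snowflake Python connector; the connection parameters and the dim/fact table and column names are invented for illustration, not taken from the commit:

```python
import snowflake.connector

# Connection details are placeholders.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    database="OLIST", schema="MART", warehouse="TRANSFORM_WH",
)

# One of the three requested mart views; dim/fact names are illustrative.
conn.cursor().execute("""
CREATE OR REPLACE VIEW customer_lifetime_value AS
SELECT
    c.customer_key,
    COUNT(DISTINCT f.order_id) AS total_orders,
    SUM(f.payment_value)       AS lifetime_revenue,
    MIN(f.order_date)          AS first_order_date,
    MAX(f.order_date)          AS last_order_date
FROM fct_orders f
JOIN dim_customer c ON c.customer_key = f.customer_key
GROUP BY c.customer_key
""")
conn.close()
```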
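The Global CO2 Explorer prompt calls for console queries exercising CTEs and the LAG, RANK, and SUM OVER window functions. The same SQL that would ship in the DuckDB-WASM page can be prototyped locally with the duckdb Python package; the dataset URL and column names below follow Our World in Data's published co2-data repository, but treat them as assumptions to verify:

```python
import duckdb

# Assumed URL and column names from Our World in Data's co2-data repo.
OWID_CO2 = "https://raw.githubusercontent.com/owid/co2-data/master/owid-co2-data.csv"

query = f"""
WITH yearly AS (                 -- a CTE, as the prompt requests
    SELECT country, year, co2
    FROM read_csv_auto('{OWID_CO2}')
    WHERE co2 IS NOT NULL AND year >= 2000
)
SELECT
    country,
    year,
    co2,
    co2 - LAG(co2) OVER w                             AS yoy_change,    -- LAG
    RANK() OVER (PARTITION BY year ORDER BY co2 DESC) AS emitter_rank,  -- RANK
    SUM(co2) OVER w                                   AS running_total  -- SUM OVER
FROM yearly
WINDOW w AS (PARTITION BY country ORDER BY year)
ORDER BY year DESC, emitter_rank
LIMIT 10
"""

print(duckdb.sql(query).df())
```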
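For the Spotify migration, the last clause of the prompt ("validate semantic equivalence of the outputs") has a simple mechanical reading: the migrated dbt model should reproduce the legacy PySpark output. A pandas sketch, with hypothetical parquet paths and key column:

```python
import pandas as pd

# Hypothetical output locations for the legacy and migrated pipelines.
legacy = pd.read_parquet("out/legacy_pyspark/track_features.parquet")
migrated = pd.read_parquet("out/dbt_marts/track_features.parquet")

key = ["track_id"]  # assumed primary key
legacy = legacy.sort_values(key).reset_index(drop=True)
migrated = migrated[legacy.columns].sort_values(key).reset_index(drop=True)

# Exact match on row counts, then tolerant comparison on the feature values.
assert len(legacy) == len(migrated), "row counts diverge"
pd.testing.assert_frame_equal(
    legacy, migrated, check_dtype=False, check_exact=False, atol=1e-6
)
print("outputs match: semantically equivalent")
```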
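The home-sales prompt names two concrete techniques, K-means and OLS. A self-contained toy sketch on synthetic data (the real merged dataset is not in the commit), using scikit-learn and statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the merged home-sales dataset.
rng = np.random.default_rng(42)
homes = pd.DataFrame({
    "sqft": rng.normal(1800, 500, 1000).clip(400),
    "beds": rng.integers(1, 6, 1000),
    "price": rng.normal(350_000, 120_000, 1000).clip(50_000),
})

# K-means on standardized features to segment the market.
X = StandardScaler().fit_transform(homes)
homes["segment"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# OLS: price explained by size and bedroom count, with an intercept.
ols = sm.OLS(homes["price"], sm.add_constant(homes[["sqft", "beds"]])).fit()
print(ols.summary().tables[1])
print(homes.groupby("segment")["price"].median())
```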
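Finally, the benchmark prompt describes a harness whose Snowflake half could look roughly like the sketch below. The warehouse name, procedure names, and credentials are placeholders; credits/DBUs consumed and bytes scanned would come from each platform's query history rather than the client, so only wall-clock time is captured here:

```python
import time

import snowflake.connector

# Placeholder benchmark jobs; the real CLV / incremental / dashboard queries
# live in the NovaMart project, which is not part of this commit.
QUERIES = {
    "clv_calculation": "CALL analytics.run_clv()",
    "daily_incremental": "CALL analytics.run_incremental()",
    "executive_dashboard": "SELECT * FROM mart.exec_dashboard",
}

conn = snowflake.connector.connect(account="...", user="...", password="...")
cur = conn.cursor()
cur.execute("USE WAREHOUSE bench_wh")  # assumed pre-created warehouse

results = []
for size in ("XSMALL", "SMALL", "MEDIUM"):
    cur.execute(f"ALTER WAREHOUSE bench_wh SET WAREHOUSE_SIZE = {size}")
    for name, sql in QUERIES.items():
        start = time.perf_counter()
        cur.execute(sql)
        cur.fetchall()
        results.append((size, name, time.perf_counter() - start))

for size, name, seconds in results:
    print(f"{size:>7}  {name:<20}  {seconds:7.2f}s")
conn.close()
```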
