Data Apps Design
In this blog I publish my takeaways and opinions on working in Data:

— Data Integration
— Database engines
— Data Modeling
— Business Intelligence
— Semantic Layer
— DataOps and DevOps
— Orchestrating jobs & DAGs
— Business Impact and Value
Schema evolution showcase

Data is ever changing. Here are a couple of examples you will definitely face while dealing with data pipelines (sketched in SQL below):

– adding new attributes, removing old ones
– changing data types: int -> float, resizing text
– renaming columns (change mapping)
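A minimal SQL sketch of each change type on a hypothetical orders table; names are illustrative, and exact ALTER support varies by engine (Redshift, for one, only resizes VARCHAR in place):

ALTER TABLE orders ADD COLUMN promo_code VARCHAR(32);   -- adding a new attribute
ALTER TABLE orders DROP COLUMN legacy_flag;             -- removing an old one
ALTER TABLE orders ALTER COLUMN amount TYPE FLOAT;      -- int -> float
ALTER TABLE orders RENAME COLUMN ts TO created_at;      -- renaming (mapping change)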

I got a couple of new events with an attribute exceeding the current maximum column length in the database.

The simple approach is to resize the problem column:

ALTER TABLE hevo.wheely_prod_orders ALTER COLUMN stops TYPE VARCHAR(4096);

A lot of ELT tools can do this automatically, without a Data Engineer's attention.

In my case it's tricky, as I am using Materialized Views:

> SQL Error [500310] [0A000]: Amazon Invalid operation: cannot alter type of a column used by a materialized view

As a result, I had to drop the dependent objects, resize the column, then re-create the objects once again:

-- 1. Drop the dependent materialized views in every environment
DROP MATERIALIZED VIEW flatten.flt_orders CASCADE;
DROP MATERIALIZED VIEW dbt_test.flt_orders CASCADE;
DROP MATERIALIZED VIEW ci.flt_orders CASCADE;

-- 2. Resize the problem column
ALTER TABLE hevo.wheely_prod_orders ALTER COLUMN stops TYPE VARCHAR(4096);

# 3. Re-create the views and their first-level descendants with dbt
dbt run -m flt_orders+1 --target prod
dbt run -m flt_orders+1 --target dev
dbt run -m flt_orders+1 --target ci

#pipelines #schemaevolution
One of the things that improves code quality is doing proper reviews:

– Require at least 2 reviews for a PR
– Assign reviewers automatically according to a CODEOWNERS file
– Require status checks to pass before merging (see the branch protection sketch below)

CODEOWNERS could look as simple as:

*  @owner1 @owner2 @owner3

/models/flatten/ @owner1
/models/staging/ @owner2
/models/marts/ @owner3
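
The review and status-check requirements map onto GitHub's branch protection API (PUT /repos/{owner}/{repo}/branches/{branch}/protection). A hedged sketch of the payload; the status-check context name is hypothetical:

{
  "required_pull_request_reviews": { "required_approving_review_count": 2 },
  "required_status_checks": { "strict": true, "contexts": ["ci/dbt"] },
  "enforce_admins": true,
  "restrictions": null
}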

Another thing is testing expectations of your data.

As soon as you face any bug or issue, add a test for it so you don't miss it next time!

My new expectation is simply a non-empty table: expect_table_row_count_to_be_between from the dbt_expectations package

  tests:
    - dbt_expectations.expect_table_row_count_to_be_between:
        min_value: 100
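
Assuming the test is declared on the flt_orders model from the earlier posts, it runs together with the rest of the suite:

dbt test -m flt_orders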

#codereview #codequality
17 days. The end 😅
Metabase + Clickhouse tutorial on Yandex.Cloud

1. Spin up a Virtual Machine
2. Configure VM: SSH, Docker
3. Download Clickhouse plug-in
4. Deploy Metabase
5. Connect to Clickhouse Playground
6. Visualize a question

https://gist.github.com/kzzzr/ecec7dca8bb70586a23569993df470e8
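
A condensed sketch of steps 3-4, assuming the Clickhouse driver jar (clickhouse.metabase-driver.jar from the metabase-clickhouse-driver project) has already been downloaded; paths are illustrative:

# Metabase discovers third-party drivers mounted into /plugins
mkdir -p ./plugins          # place clickhouse.metabase-driver.jar here
docker run -d -p 3000:3000 \
  -v $(pwd)/plugins:/plugins \
  --name metabase metabase/metabase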

#bi #clickhouse
Prior to diving deep into complex analytics modeling topics like Sessionization, Attribution, or RFM, one has to understand the motivation behind them.

It is the questions driving business decisions that come first; the instruments and practices to find reliable answers to them come second.

Which questions does your business ask?

#analytics #modeling
Who knows what the heck this is?
Telegram sold out?
I just tried to upgrade from 21.08 to 21.10.

After 30 minutes, it's clear this cluster is never coming back.

I hope I can create a new one and restore backup onto it.
We are in the middle of a migration from Amazon Redshift DC2 nodes (2nd gen) to RA3 nodes (3rd gen) at Wheely.

What this means for us:
– Almost unlimited disk space (RA3 separates compute and storage)
– Speeding up Data Marts to a 2-hour delay from real-time
– Blue/green deployments (sketched below)
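
Blue/green here essentially means restoring a snapshot of the old cluster onto the new node type, then switching the endpoint. A hedged AWS CLI sketch; identifiers, node type, and node count are hypothetical:

aws redshift restore-from-cluster-snapshot \
  --snapshot-identifier dc2-final-snapshot \
  --cluster-identifier wheely-ra3 \
  --node-type ra3.4xlarge \
  --number-of-nodes 2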

I will follow up as soon as we are finished.

Attached is a simplified checklist plan.

Any questions are welcome.
Hi! Today, November 18 at 15:00, I invite you to a webinar.

Semi-structured Data in Analytical Warehouses: Nested JSON + Arrays

- Sources of semi-structured data: Events, Webhooks, Logs
- Approaches: JSON functions, special data types, External tables (Lakehouse)
- Performance optimization

We will look at examples in Amazon Redshift and Clickhouse.

Registration link: https://otus.ru/lessons/dwh/#event-1661
The link to the YouTube stream will be published here 5 minutes before the start.
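
To give a flavor of the JSON functions approach, here is a hedged Redshift SQL sketch; the table and attribute names are hypothetical:

SELECT
    json_extract_path_text(payload, 'order', 'id') AS order_id,  -- nested object access
    json_extract_array_element_text(
        json_extract_path_text(payload, 'stops'), 0
    ) AS first_stop                                              -- first element of an array
FROM events.webhooks;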
So the Amazon Redshift cluster migration is almost complete.
The new cluster is way more powerful. Now I'm seeking ways to fully utilize its resources 😄

I can state that not everything went as expected.
The most painful parts turned out to be:

– Migrating an S3 bucket with 1M+ files to a new region (took ~4-5 hours) – really challenging (see the sketch after this list)
– Not losing data events while switching between clusters
– VPC and network issues (connecting from the BI tool)
– Hotfixing several Python UDFs that suddenly stopped working in the new environment
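
For the bucket move, a hedged sketch of the cross-region copy; bucket names and regions are hypothetical:

# copy objects between buckets across regions
aws s3 sync s3://wheely-dwh-old s3://wheely-dwh-new \
  --source-region us-east-1 \
  --region eu-west-1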

In some time I will publish a detailed reflection on this process.