Forwarded from DLeX: AI Python (Farzad 🦅)
A new course from MIT:
MIT Deep Learning for Art, Aesthetics, and Creativity
Generating photorealistic images and art has been a highlight of AI in 2022.
The course covers AI + creativity, GANs, diffusion models, etc.
Videos: https://youtube.com/playlist?list=PLCpMvp7ftsnIbNwRnQJbDNRqO6qiN3EyH
Website: https://ali-design.github.io/deepcreativity/
#resources #videos #course #deep_learning
#DeepLearning
❇️ @AI_Python
Forwarded from DLeX: AI Python (Farzad 🦅)
Google engineers offered 28 actionable tests for #machinelearning systems. 👇
Introducing 👉 The ML Test Score: A Rubric for ML Production Readiness and Technical Debt Reduction (2017). 👈
If #ml #training is like compilation, then ML testing should be applied to both #data and code.
7 model tests
1⃣ 👉 Review model specs and version-control them. This makes training auditable and improves reproducibility.
2⃣ 👉 Ensure model loss is correlated with user engagement.
3⃣ 👉 Tune all hyperparameters. Whether you use grid search, Bayesian optimization, or something else, tune all of them.
4⃣ 👉 Measure the impact of model staleness. The age-versus-quality curve shows how much staleness is tolerable (see the staleness sketch after this list).
5⃣ 👉 Test against a simpler model regularly to confirm the benefit of more sophisticated techniques (see the baseline sketch below).
6⃣ 👉 Check that model quality is good across different data segments, e.g. user countries, movie genres, etc. (see the per-segment sketch below).
7⃣ 👉 Test model inclusiveness by checking against protected dimensions or enriching under-represented categories.
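
A minimal sketch for test 4⃣ (staleness), assuming a trained `model` with `predict_proba` and a hypothetical `load_eval_batch(age_days)` helper that returns labelled data collected that many days after training; neither the helper nor the choice of AUC comes from the paper.

```python
# Sketch: age-versus-quality curve for model staleness (model test 4).
from sklearn.metrics import roc_auc_score

def staleness_curve(model, load_eval_batch, max_age_days=30):
    """Return [(age_in_days, quality)] for a model frozen at day 0."""
    curve = []
    for age in range(1, max_age_days + 1):
        # Labelled data collected `age` days after the training cutoff.
        X_eval, y_eval = load_eval_batch(age)
        quality = roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1])
        curve.append((age, quality))
    return curve

# If quality drops below your tolerance after, say, 7 days,
# the model needs to be retrained at least weekly.
```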
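
A sketch for test 5⃣ (baseline comparison); the scikit-learn `DummyClassifier` and `LogisticRegression` baselines and the accuracy metric are illustrative choices, not the paper's.

```python
# Sketch: compare the production model against simple baselines (model test 5).
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def compare_to_baselines(complex_model, X_train, y_train, X_val, y_val):
    """Report validation accuracy of the complex model vs. simple baselines."""
    baselines = {
        "majority_class": DummyClassifier(strategy="most_frequent"),
        "logistic_regression": LogisticRegression(max_iter=1000),
    }
    scores = {"complex_model": accuracy_score(y_val, complex_model.predict(X_val))}
    for name, baseline in baselines.items():
        baseline.fit(X_train, y_train)
        scores[name] = accuracy_score(y_val, baseline.predict(X_val))
    return scores  # the complex model should clearly beat both baselines
```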
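
A sketch for test 6⃣ (per-segment quality); the validation DataFrame and column names such as `user_country` are assumed for illustration.

```python
# Sketch: per-segment quality check (model test 6).
import pandas as pd
from sklearn.metrics import accuracy_score

def quality_by_segment(model, df_val, feature_cols, label_col, segment_col):
    """Return a per-segment accuracy table, e.g. segment_col='user_country'."""
    rows = []
    for segment, group in df_val.groupby(segment_col):
        preds = model.predict(group[feature_cols])
        rows.append({segment_col: segment,
                     "n": len(group),
                     "accuracy": accuracy_score(group[label_col], preds)})
    return pd.DataFrame(rows).sort_values("accuracy")

# Flag any segment whose accuracy falls far below the overall number.
```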
7 data tests
1⃣ 👉 Capture feature expectations in a schema, using data statistics + domain knowledge (see the schema sketch after this list).
2⃣ 👉 Use beneficial features only, e.g. by training a set of models, each with one feature removed (see the ablation sketch below).
3⃣ 👉 Avoid costly features. Cost includes running time and RAM as well as upstream work and instability.
4⃣ 👉 Adhere to feature requirements. If certain features can’t be used, enforce the restriction programmatically.
5⃣ 👉 Set privacy controls. Budget enough time for any new feature that depends on sensitive data.
6⃣ 👉 Add new features quickly. If this conflicts with 5⃣, privacy comes first.
7⃣ 👉 Test the code for all input features. Bugs do exist in feature-creation code (see the unit-test sketch below).
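
A sketch for data test 1⃣ (schema expectations); the column names and ranges in `FEATURE_SCHEMA` are invented for illustration and would in practice come from data statistics + domain knowledge.

```python
# Sketch: schema-based feature expectations (data test 1).
import pandas as pd

# Hypothetical schema: per-feature expected range and whether nulls are allowed.
FEATURE_SCHEMA = {
    "age":         {"min": 0,   "max": 120,  "allow_null": False},
    "watch_hours": {"min": 0.0, "max": 24.0, "allow_null": True},
}

def validate_batch(df: pd.DataFrame) -> list[str]:
    """Return a list of schema violations for one batch of training data."""
    errors = []
    for col, rule in FEATURE_SCHEMA.items():
        if col not in df.columns:
            errors.append(f"missing column: {col}")
            continue
        if not rule["allow_null"] and df[col].isna().any():
            errors.append(f"{col}: unexpected nulls")
        if df[col].min() < rule["min"] or df[col].max() > rule["max"]:
            errors.append(f"{col}: values outside [{rule['min']}, {rule['max']}]")
    return errors
```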
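
A sketch for data test 2⃣ (leave-one-feature-out ablation), assuming pandas DataFrames for the features and using a plain `LogisticRegression` as a stand-in model; accuracy is an illustrative metric.

```python
# Sketch: leave-one-feature-out feature ablation (data test 2).
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def feature_ablation(X_train, y_train, X_val, y_val):
    """Return {feature: accuracy drop when that feature is removed}."""
    full = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    full_score = accuracy_score(y_val, full.predict(X_val))
    drops = {}
    for col in X_train.columns:
        reduced = LogisticRegression(max_iter=1000).fit(
            X_train.drop(columns=[col]), y_train)
        score = accuracy_score(y_val, reduced.predict(X_val.drop(columns=[col])))
        drops[col] = full_score - score  # ~0 or negative => feature adds no benefit
    return drops
```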
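
A sketch for data test 7⃣ (testing feature-creation code); `bucketize_age` is a hypothetical feature function, tested here with pytest-style asserts.

```python
# Sketch: unit tests for feature-creation code (data test 7).
import pytest

def bucketize_age(age: int) -> str:
    """Map a raw age to a coarse bucket used as a categorical feature."""
    if age < 0:
        raise ValueError("age must be non-negative")
    if age < 18:
        return "minor"
    if age < 65:
        return "adult"
    return "senior"

def test_bucketize_age_boundaries():
    assert bucketize_age(0) == "minor"
    assert bucketize_age(17) == "minor"
    assert bucketize_age(18) == "adult"
    assert bucketize_age(64) == "adult"
    assert bucketize_age(65) == "senior"

def test_bucketize_age_rejects_negative():
    with pytest.raises(ValueError):
        bucketize_age(-1)
```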
See the 7 infrastructure tests and 7 monitoring tests in the paper. 👇
They interviewed 36 teams across Google and found:
👉 Using a checklist helps avoid mistakes (as a surgeon would).
👉 Data dependencies lead to outsourced responsibility; other teams’ validation may not validate your use case.
👉 A good framework promotes integration tests, which are not yet well adopted.
👉 Assess the assessment to better assess your system.
https://research.google.com/pubs/archive/aad9f93b86b7addfea4c419b9100c6cdd26cacea.pdf
Forwarded from عتید
◼️◼️◼️◼️◼️◼️◼️◼️
You are a graceful cypress, and your place must be upon my eye,
for a cypress’s place is far more pleasant at the edge of a stream.
#عمان_سامانی
@atidpoetry
◼️◼️◼️◼️◼️◼️◼️◼️
Today on the blog, learn about OptFormer, one of the first Transformer-based frameworks for hyperparameter tuning, learned from large-scale optimization data using flexible text-based representations →
https://twitter.com/GoogleAI/status/1560382410792898560?t=HSP-SjlzuOeWCQ0MfJMLzg&s=35