📝Auto-generated summaries in Google Docs
Google Docs can now automatically generate summaries of their content when available. While all users can add summaries, auto-generated suggestions are currently only available to Google Workspace business customers.
This is achieved with natural language understanding (NLU) and natural language generation (NLG) ML models, specifically Transformer and Pegasus. A popular technique for combining NLU and NLG is sequence-to-sequence learning, where the input is the words of the document and the output is the words of the summary. A neural network then learns to map input tokens to output tokens. Early applications of the sequence-to-sequence paradigm used recurrent neural networks (RNNs) for both the encoder and the decoder.
The introduction of Transformers provided a promising alternative to RNNs thanks to self-attention, which better models long-range dependencies between inputs and outputs, something that is critical when summarizing documents. However, these models require large amounts of manually labeled data to train well, so the arrival of Transformers alone was not enough to make significant progress in document summarization.
The combination of Transformers with self-supervised pre-training (BERT, GPT, T5) led to major breakthroughs in many NLU problems for which only limited labeled data is available. In self-supervised pre-training, the model uses large amounts of unlabeled text to learn general language understanding and generation capabilities. Then, in a subsequent fine-tuning step, the model learns to apply these abilities to a specific task, such as summarization or question answering.
The Pegasus work takes this idea one step further by introducing a pre-training objective tailored to abstractive summarization. In Pegasus pre-training, also called Gap Sentence Prediction (GSP), full sentences from unlabeled news articles and web documents are masked from the input, and the model is required to reconstruct them from the remaining unmasked sentences. In particular, GSP uses various heuristics to mask sentences that are considered important to the document, to make pre-training as close to the summarization task as possible. Pegasus achieved state-of-the-art results on a diverse set of summarization datasets.
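To get a feel for what a Pegasus-style abstractive summarizer does, here is a minimal sketch using a publicly available Pegasus checkpoint from Hugging Face Transformers. This is not the production Google Docs model; the checkpoint name and the input text are illustrative assumptions.
```python
# A minimal sketch, not the Google Docs production model: it loads the public
# "google/pegasus-xsum" checkpoint from Hugging Face Transformers.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-xsum"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

document = (
    "Google Docs can now suggest a short summary of a document. "
    "The feature is powered by an abstractive summarization model..."
)

# Encode the document, generate summary tokens autoregressively, then decode them.
inputs = tokenizer(document, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=60, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```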
Building on Transformer and Pegasus, the Google AI researchers carefully cleaned and filtered the fine-tuning data so that it contained training examples that were more consistent and represented a coherent definition of summaries. Despite reducing the amount of training data, this produced a better model. The next problem was serving a high-quality model in production. Although the Transformer encoder-decoder architecture is the dominant approach to training models for sequence-to-sequence problems such as abstractive summarization, it can be inefficient and impractical in real-world applications. The main inefficiency lies in the Transformer decoder, where the output summary tokens are generated sequentially through autoregressive decoding. Decoding becomes noticeably slower as summaries get longer, because the decoder attends to all previously generated tokens at every step. RNNs are a more efficient architecture for decoding, since there is no self-attention over the previous tokens as in the Transformer model.
Knowledge distillation (transferring knowledge from a large model to a smaller, more efficient one) was used to distill the Pegasus model into a hybrid architecture with a Transformer encoder and an RNN decoder, and the number of RNN decoder layers was reduced to improve efficiency. The resulting model has better latency and memory footprint while maintaining the original quality.
https://ai.googleblog.com/2022/03/auto-generated-summaries-in-google-docs.html
#test
What is the difference between XGBoost and LightGBM?
Anonymous Quiz
• There is no difference, they are the same (17%)
• XGBoost supports categorical features (9%)
• LightGBM supports categorical features (50%)
• XGBoost is based on gradient boosting, LightGBM is based on logistic regression (24%)
📝Dataframe validation with Pandera
In large DS projects, the Great Expectations framework can be used to validate datasets and check data quality. Smaller tasks, however, call for simpler tools, for example the lightweight Python library Pandera, which explicitly validates dataframe contents at runtime. Pandera lets you define a data schema once, using a class-based API with pydantic-style syntax, and use it to validate different kinds of dataframes, including pandas, dask, modin, and pyspark.pandas. You can check the types and properties of columns in a pd.DataFrame or of values in a pd.Series, and run more complex statistical checks such as hypothesis tests. You can also synthesize data from schema objects for property-based testing of code that works with pandas data structures.
Function decorators make it easy to integrate validation into existing data analysis and processing pipelines. With lazy validation, all schema errors in a dataframe are collected and reported at once instead of failing on the first one. Finally, compatibility with other Python tools such as pydantic, fastapi, and mypy makes Pandera a useful tool for ML developers and data analysts.
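A minimal sketch of the class-based API described above; the schema, column names and values are made up for illustration:
```python
import pandas as pd
import pandera as pa
from pandera.typing import DataFrame, Series


# Hypothetical schema: the column names and constraints are illustrative only.
class SalesSchema(pa.SchemaModel):
    product: Series[str]
    price: Series[float] = pa.Field(gt=0)
    quantity: Series[int] = pa.Field(ge=0)

    class Config:
        coerce = True  # cast columns to the declared dtypes before validating


@pa.check_types
def load_sales() -> DataFrame[SalesSchema]:
    # In a real pipeline this would read from a file or database.
    return pd.DataFrame(
        {"product": ["tea", "coffee"], "price": [9.99, 14.50], "quantity": [3, 0]}
    )


validated = load_sales()  # raises a SchemaError if the data violates the schema
print(validated)
```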
Documentation: https://pandera.readthedocs.io/en/stable/
Example: https://towardsdatascience.com/validate-your-pandas-dataframe-with-pandera-2995910e564
💥Why you need Modin: Pandas alternative for fast big data processing
Handling large dataframes with Pandas is slow because the library does not support working with data that does not fit in available memory. As a result, Pandas workflows that work well for prototyping on a few MB of data do not scale to real datasets of tens or hundreds of GB. Because operations run single-threaded and entirely in RAM, Pandas is poorly suited to really large datasets. There is an alternative: Modin, a Python library with a Pandas-like API that scales across all processor cores using the Dask or Ray engine.
Modin also supports out-of-core processing of data that does not fit in memory, so you can comfortably work with hundreds of GB without worrying about severe slowdowns or out-of-memory errors. With cluster and out-of-core support, Modin offers DataFrame operations with excellent performance on a single node and high scalability in a cluster.
In a local setting (no cluster), Modin creates and manages a local Dask or Ray cluster for execution. There is no need to specify how to partition the data, or even to know how many cores the system has. You can reuse existing Pandas code by simply changing the import statement from pandas to modin.pandas and get a significant speedup even on a single machine: Modin delivers up to a 4x speedup on a laptop with 4 physical cores.
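A minimal sketch of the drop-in replacement described above (the file and column names are hypothetical; the engine can also be pinned explicitly via the MODIN_ENGINE environment variable):
```python
# Only the import changes; the rest is ordinary pandas code.
import modin.pandas as pd  # instead of: import pandas as pd

df = pd.read_csv("big_dataset.csv")        # hypothetical file; read in parallel
daily = df.groupby("day")["amount"].sum()  # executed across all available cores
print(daily.nlargest(10))
```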
Docs: https://modin.readthedocs.io/en/latest/index.html
Github: https://github.com/modin-project/modin
#test
Support-vector machine (SVM) method is used for
Anonymous Quiz
• classification and regression analysis (87%)
• text generation with NLP (2%)
• recommendation systems (7%)
• prediction on noisy data (4%)
Z-scoring for simple and fast anomaly detection
Anomaly detection is a fairly common problem that covers many scenarios, from financial fraud to computer network failures. Some problems require complex machine learning models, but more often simpler and cheaper methods are sufficient. For example, you may have sales data over a period of time and want to flag days with abnormally high volumes, or highlight customers with abnormally many credit card swipes for risk review.
For such cases, a simple statistical method of marking outliers, called Z-scoring, will do. The score is the difference between the current value and the mean, divided by the standard deviation: z = (x - mean) / std. Z-scoring assumes that the values follow a classical normal distribution. Converting raw values to a logarithmic scale improves the ability of most ML models to discern relationships and improves the ability of Z-scores to flag outliers.
In practice, implementing the Z-score is very simple: it can be written as a small script or even as a set of SQL queries, so you can quickly build a lightweight MVP and test a hypothesis.
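A minimal Python sketch of the approach (the data is synthetic and the 3-sigma threshold is just a common choice, not a rule):
```python
import numpy as np
import pandas as pd

# Synthetic daily sales volumes with a heavy right tail.
rng = np.random.default_rng(42)
sales = pd.DataFrame(
    {
        "day": pd.date_range("2022-01-01", periods=90),
        "volume": rng.lognormal(mean=8, sigma=0.4, size=90),
    }
)

# Log-transform first (as suggested above), then compute the Z-score.
log_volume = np.log(sales["volume"])
sales["z"] = (log_volume - log_volume.mean()) / log_volume.std()

# Flag days more than 3 standard deviations from the mean as anomalies.
anomalies = sales[sales["z"].abs() > 3]
print(anomalies)
```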
https://towardsdatascience.com/anomaly-detection-in-sql-2bcd8648f7a8
🕸✍🏻3 Python libraries for working with URLs
The task of processing URLs is quite common in practice: for example, compiling a list of the most frequently visited sites, or of the sites that may be visited from corporate computers during business hours. The following Python libraries are useful for automating such cases (a short usage sketch follows the list):
• Yarl - provides a convenient URL class for parsing and modifying the address of a web resource and extracting its parts. It only works with Python 3 and does not accept boolean values in its API - you have to convert booleans to strings yourself according to the protocol you are working with. https://github.com/aio-libs/yarl
• Furl - makes parsing and manipulating URLs easier. The library has a wide range of features, but also a number of limitations. In particular, the furl object is mutable, so problems can arise when it is passed around. https://github.com/gruns/furl
• URLObject - a utility class for manipulating URLs with a clean API that favors descriptive method names over operator overloading. The object is immutable: each change creates a new URL object. However, the library does not perform any encoding/decoding transformations, which the user has to handle on their own. https://github.com/zacharyvoase/urlobject
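As an illustration, a minimal sketch with Yarl (the URL is made up; Furl and URLObject offer similar operations with their own APIs):
```python
from yarl import URL

url = URL("https://example.com/reports/2022?page=2&lang=en")

print(url.host)           # example.com
print(url.path)           # /reports/2022
print(url.query["page"])  # 2

# yarl URL objects are immutable: each modification returns a new URL.
archive = url.with_path("/archive") / "2022"
next_page = url.update_query({"page": "3"})  # booleans must be passed as strings
print(archive)
print(next_page)
```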
#test
Why is multicollinearity of features a problem in ML?
Anonymous Quiz
• It is too hard to define dependent variables in the learning dataset (18%)
• It reduces the reliability of results and the speed of calculations as the feature space grows (60%)
• It reduces confidence intervals (8%)
• It increases the complexity of ML algorithms (15%)
👆🏻Something about deduplication with DISTINCT
You can exclude duplicates from a result set simply by adding the DISTINCT keyword to the SQL query. However, this simple solution is not always the right one. To guarantee that a result set has no duplicates, the DBMS has to compare all rows with each other and filter out the repeats. This takes a lot of CPU and memory, because all the rows have to be held and compared in memory, even if hashing is used under the hood. In addition, DISTINCT reduces the parallelism of the computation, slowing down query execution.
DISTINCT removes duplicates, but it does not fix the incorrect joins and filters that most often cause the repetitions in practice, for example an accidental CROSS JOIN, or using RANK instead of ROW_NUMBER with a poorly defined partition window. See here for details with code examples: https://jmarquesdatabeyond.medium.com/sql-like-a-pro-please-stop-using-distinct-31bdb6481256
💥DataSpell: A professional data science development environment from JetBrains
Missing a comfortable development environment when working in lightweight Jupyter notebooks? Need to write Python code in a reliable IDE with all the DS libraries at hand? Try DataSpell by JetBrains, a professional IDE similar to PyCharm that combines support for many popular data analysis and machine learning libraries with a powerful set of developer tools.
Released in 2020, today DataSpell is in demand by machine learning developers and data analysts around the world.
https://www.jetbrains.com/ru-ru/dataspell/
#test
What can be used to reduce the risk of overfitting an ML model?
Anonymous Quiz
• Normalization (8%)
• Regularization (83%)
• Normalization (5%)
• Optimization (4%)
☀️TOP-15 Data Science and ML conferences all over the world in May 2022:
• 5-6 May - The #1 MLOps Conference on the planet - Marriott Marquis, New York, NY https://rev.dominodatalab.com/
• 5-6 May - Data Innovation Summit 2022 - KISTAMÄSSAN, STOCKHOLM https://datainnovationsummit.com/
• 10-12 May - Wrangle Summit 2022 Virtual https://www.trifacta.com/events/wrangle-summit-2022/
• 11-12 May - Big Data & AI World. Frankfurt, Germany. https://www.bigdataworldfrankfurt.de/
• 12-13 May - The Data Science Conference. Chicago, IL, USA https://www.thedatascienceconference.com/
• 12 May, 9AM ET - Ontotext Demo-Day. Virtual. https://event.gotowebinar.com/event/bfd3b6ef-828c-46a1-a644-b4e785cece6c
• 15-18 May - FLAIRS-35: Special Track on Neural Networks and Data Mining, Jensen Beach, FL, USA. https://sites.google.com/view/flairs-35-nn-dm-track/home
• 17 May - The data dividend: reimagining data strategies to deepen insight. San Francisco, CA, USA https://events.economist.com/custom-events/the-data-dividend-san-francisco/
• 18 May - Data Science Mini Salon | AI and ML in Retail & E-Commerce. Virtual. https://www.datascience.salon/retail-and-ecommerce
• 23-25 May - TDWI Visualization, Dashboards, and Analytics Adoption https://tdwi.org/events/seminars/may/dashboards-visualization-analytics-adoption/home.aspx
• 24-25 May - Graph + AI Summit. Virtual. https://www.tigergraph.com/graphaisummit
• 24-25 May - Chief Data & Analytics Officers, Insurance US. New York, NY, USA. https://cdaoi.coriniumintelligence.com/
• 25-26 May - Data Reliability Engineering Conference. Virtual https://drecon.org/
• 26 May - Zero Gravity: A Modern Cloud Data Pipeline Event. Virtual. https://www.incorta.com/zerogravity
• 30 May – HeyGrowth - Yerevan, Armenia https://heygrowth.com/yerevan
💫Continuous Machine Learning: CML for CI/CD
Need to introduce CI/CD into the development of ML systems? Try CML, an open-source CLI tool from Iterative.ai for implementing CI/CD within MLOps. It is suitable for automating ML model development workflows, including provisioning infrastructure, training and evaluating models, comparing experiments across the project history, and monitoring changing datasets. CML is based on the following principles:
• GitLab or GitHub for managing ML experiments, monitoring model training and data changes using DVC;
• Automated reports for machine learning experiments, with metrics and plots on every pull request, to support data-driven decisions.
• no additional services - only GitLab, Bitbucket or GitHub, Docker and DVC. Optionally, you can add cloud storage, as well as self-hosted or cloud workers such as AWS EC2 or MS Azure.
CML introduces CI/CD-style automation into the workflow: most of the configurations are defined in the cml.yaml file stored in the repository. This file specifies what actions should be taken when a new feature branch is ready to be merged into the main branch. When a pull request is created, GitHub Actions uses this workflow and performs the actions specified in the configuration file.
Source code: https://github.com/iterative/cml
Documentation: https://cml.dev/doc
Use case example: https://towardsdatascience.com/continuous-machine-learning-e1ffb847b8da
#test
Which method in Apache Spark works with the file system instead of RAM?
Anonymous Quiz
• partitionBy() (46%)
• coalesce() (15%)
• repartition() (39%)
YDB: scalable fault-tolerant NewSQL DBMS from Yandex. Now open source
April 19, 2022. Yandex has published the source code of the distributed NewSQL DBMS YDB, which allows you to create scalable, fault-tolerant services that can support a large operational load. The code is available under the Apache 2.0 license.
YDB is an open-source distributed SQL database that combines high availability and scalability with strict consistency and ACID transactions. YDB handles failures automatically, recovering even when a rack or an entire data center goes down. Its reliability has been tested on Yandex services (Alisa, Taxi, Market, Metrika and almost 500 more projects). You can deploy YDB both on your own servers and in the cloud, including Yandex Cloud or other providers.
https://ydb.tech/
https://github.com/ydb-platform/ydb
🗒Loguru for logging in Python scripts
This library is useful for ML specialists and data engineers who write a lot of Python: it simplifies the logging and debugging process. In addition, Loguru offers a number of convenient features while keeping the familiar behavior of the standard logging facilities.
Loguru works out of the box and provides features such as log file rotation, compression, and scheduled cleanup of old logs (retention). It is also thread-safe and supports colorized output. This open-source library can be combined with notification tools to send alerts by email or through other channels.
Finally, Loguru remains interoperable with the standard Python logging module, which makes it easy to adopt in existing projects.
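A minimal sketch of typical Loguru usage (the file name and rotation settings are arbitrary examples):
```python
from loguru import logger

# Write logs to a file, rotating at 10 MB, keeping them for a week, compressed as zip.
logger.add("app.log", rotation="10 MB", retention="7 days", compression="zip")

logger.info("Training started with {} samples", 10_000)


@logger.catch  # any uncaught exception is logged with a full traceback
def risky_division(a, b):
    return a / b


risky_division(1, 0)
```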
Source code: https://github.com/Delgan/loguru
Use case example: https://medium.com/geekculture/python-loguru-a-powerful-logging-module-5f4208f4f78c
#test
What is the main difference between MapReduce-operations in Spark and Hadoop?
What is the main difference between MapReduce-operations in Spark and Hadoop?
Anonymous Quiz
41%
Spark is faster
8%
Hadoop is faster
27%
these are the same
24%
the different dataset's scale
🔥6
🔥TOP 5 new features in Python 3.11 Alpha 5
At the beginning of 2022 a new pre-release of Python came out - Alpha 5 of version 3.11. Main features:
• Improved debugging thanks to more precise error messages. Python 3.11 tracebacks point directly to the expression where the error occurred. Getting similar information previously required adding extra context to the code, which made things more difficult; now this context is provided automatically.
• Exception groups - you can now handle an error differently depending on which other exceptions it occurs together with. Instead of one big try/except block with a separate except clause for every possible exception, an exception group bundles many different exceptions together, and a single handler is invoked only for the matching exceptions inside the group, even when several unrelated failures happen at once (see the sketch after this list).
• Variadic generics - generic types can now be parameterized with a variable number of type parameters. Previously you had to spell out a separate definition for each possible number of parameters; variadic generics cover any number of them at once, which is useful, for example, when typing multi-dimensional arrays.
• CPython performance optimizations. Changes to how function calls and name lookups work, together with lighter frames on the C stack, reduce interpreter overhead and speed up everything from object-oriented code to dictionary access.
• Easier interoperability with other languages such as JavaScript on top of Python, thanks to higher performance and better support for parallel computing.
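A minimal sketch of how exception groups and the new except* syntax look in Python 3.11 (the exceptions raised here are arbitrary examples):
```python
# Requires Python 3.11+.
def run_tasks():
    # Several unrelated failures are reported together as one group.
    raise ExceptionGroup(
        "several tasks failed",
        [ValueError("bad value"), TypeError("bad type"), ValueError("another bad value")],
    )


try:
    run_tasks()
except* ValueError as group:
    print("value errors:", [str(e) for e in group.exceptions])
except* TypeError as group:
    print("type errors:", [str(e) for e in group.exceptions])
```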
https://morioh.com/p/af7debd024e2
https://medium.com/@Sabrina-Carpenter/python-alpha-5-is-here-5-promising-features-that-will-blow-your-mind-a4abd406d0ad
💫Graph visualization with PyGraphistry
PyGraphistry is a Python library for visual graph AI that lets you extract, transform, analyze and visualize large graphs, with end-to-end sessions against the Graphistry visualization server. Graphistry was built specifically for large graphs: the custom WebGL rendering client draws up to 8 million nodes and edges at a time, and most client GPUs handle between 100,000 and 2 million elements. The GPU analytics engine on the server side supports even larger graphs. Graphistry fits smoothly into workflows across the PyData ecosystem, including Pandas/Spark/Dask dataframes, Nvidia RAPIDS GPU dataframes, DGL/PyTorch graph neural networks, and various data connectors.
PyGraphistry is a streamlined, optimized, PyData-native interface to the language-independent Graphistry REST APIs. It can work with Python-accessible data sources such as CSV, SQL, Neo4j, Splunk and more (a minimal usage sketch follows the list of user categories below).
The PyGraphistry Python client targets different categories of users:
• Data explorer: go from raw data to accelerated visual analysis in a couple of lines of code, share live results, and build up more advanced analyses over time in Jupyter Notebook and Google Colab.
• Developer: quickly prototype Python solutions with PyGraphistry, embed visualizations in a language-independent way through the REST API, and customize colors, icons, templates, JavaScript, and more.
• Analyst: build visual dashboards with interactive search, filters, timelines, bar charts, and more, and embed them in any framework.
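A minimal sketch of a PyGraphistry session (it assumes you have a Graphistry Hub account or a self-hosted server; the credentials and edge data are placeholders):
```python
import pandas as pd
import graphistry

# Placeholder credentials: register against Graphistry Hub or your own server.
graphistry.register(api=3, username="my_user", password="my_password")

# A tiny hypothetical edge list; in practice this could be millions of rows.
edges = pd.DataFrame(
    {
        "src": ["alice", "bob", "carol", "alice"],
        "dst": ["bob", "carol", "alice", "carol"],
        "weight": [1, 3, 2, 5],
    }
)

# Bind source/destination columns and plot an interactive GPU-accelerated view.
g = graphistry.bind(source="src", destination="dst").edges(edges)
g.plot()
```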
https://github.com/graphistry/pygraphistry
#test
Key difference between window and aggregate functions is
Anonymous Quiz
• Aggregate functions operate on a set of values to return a range of values (5%)
• They are the same, but window functions are more difficult to write (3%)
• Window functions operate on a set of values to return a range of values (58%)
• Aggregate functions return a single value for each row from the underlying query (34%)